
Search Results


  • Cocoa Production: Ghana and Ivory Coast - Historic Trend | Akweidata

    Work in progress.

  • Initial margin requirement for Derivative Trading | Akweidata

    A simplified VaR-based approach to calculating the initial margin requirement for derivative trading.
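The entry does not spell out the calculation, but a simplified VaR-based initial margin is commonly computed as a high-confidence quantile of historical losses, scaled to a margin period of risk. A minimal sketch under those assumptions (the function name, 99% confidence level, and two-day horizon are illustrative, not taken from the project):

```python
import numpy as np

def initial_margin_var(returns, position_value, confidence=0.99, horizon_days=2):
    """Historical-simulation VaR initial margin (illustrative).

    returns: array of daily returns for the derivative position
    position_value: current exposure to scale the margin by
    """
    # Loss quantile at the chosen confidence level, one-day horizon
    losses = -np.asarray(returns)
    var_1d = np.percentile(losses, confidence * 100)
    # Scale to the margin period of risk (square-root-of-time rule)
    var_h = var_1d * np.sqrt(horizon_days)
    return max(var_h, 0.0) * position_value

# Simulated daily returns with ~2% volatility, for demonstration only
rng = np.random.default_rng(0)
daily_returns = rng.normal(0, 0.02, 1000)
margin = initial_margin_var(daily_returns, 1_000_000)
print(f"Initial margin: {margin:,.0f}")
```

The square-root-of-time scaling is a common shortcut; a production margin model would typically use overlapping multi-day returns or a filtered historical simulation instead.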

  • Electricity Consumption as a proxy of production: Draft 1 | Akweidata

    Using publicly available data on Swiss power consumption, this exploration seeks to identify an association between power consumption and the output of selected firms.

    Sources:
    https://www.swissgrid.ch/en/home/operation/grid-data/current-data.html#wide-area-monitoring
    https://www.ewz.ch/en/about-ewz/newsroom/current-issues/electricity-shortage/city-zurich-energy-consumption.html
    https://data.stadt-zuerich.ch/dataset/ewz_stromabgabe_netzebenen_stadt_zuerich
    https://data.stadt-zuerich.ch/group/energie

    Work in progress.
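As an illustration of the intended analysis (the project itself is a work in progress), the sketch below builds synthetic consumption and output series with a built-in association and measures their correlation; real series would come from the Swissgrid and Zurich open-data sources linked above:

```python
import numpy as np
import pandas as pd

# Hypothetical daily series: grid consumption (GWh) and a firm output index.
# Both are simulated here purely to demonstrate the correlation step.
rng = np.random.default_rng(1)
days = pd.date_range("2023-01-01", periods=365, freq="D")
consumption = pd.Series(100 + 10 * rng.standard_normal(365), index=days)
output = 0.8 * consumption + 5 * rng.standard_normal(365)  # association by construction

df = pd.DataFrame({"consumption": consumption, "output": output})
corr = df["consumption"].corr(df["output"])
print(f"Pearson correlation: {corr:.2f}")
```

With real data, seasonality and weekday effects in power demand would need to be removed before the correlation says anything about firm output.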

  • Game Theory: Prisoner's Dilemma Strategies Tools | Akweidata

    Recreating and simulating Robert Axelrod's 1980 computer tournament.
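Axelrod's tournament pitted strategies against one another in a round-robin of iterated prisoner's dilemmas with the standard payoffs (T=5, R=3, P=1, S=0). A minimal sketch of a single pairing, using tit-for-tat and always-defect as example strategies (the project's actual strategy set and round count are not shown here):

```python
# Iterated prisoner's dilemma, in the spirit of Axelrod (1980).
# 'C' = cooperate, 'D' = defect; payoffs are (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(my_hist, opp_hist):
    return 'D'

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then mirror the opponent's previous move
    return opp_hist[-1] if opp_hist else 'C'

def play_match(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

tft_vs_tft = play_match(tit_for_tat, tit_for_tat)   # mutual cooperation
tft_vs_d = play_match(tit_for_tat, always_defect)   # exploited once, then retaliates
print(tft_vs_tft, tft_vs_d)  # → (600, 600) (199, 204)
```

A full recreation would loop `play_match` over all strategy pairs and rank strategies by total score, which is how tit-for-tat won the original tournament.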

  • Dynamic View of Ghana's Unemployment | Akweidata

    Investigating the trend and segmentation of employment in Ghana.

  • Dynamic view of Ghana's Insurance Industry | Akweidata

    Work in progress.

  • Alternative Data Regressor: V1 | Akweidata

    A Python program that fits a linear regression of alternative data against financial asset prices. A CSV file is the input; the regression results are the output. The program processes time series data from a CSV file and executes a series of analytical steps based on a predefined decision tree. Key functionalities include:

    • Reading a CSV file: the user supplies the path to a CSV file, which the program reads into a DataFrame.
    • Stationarity testing: the time series is tested for stationarity with the Augmented Dickey-Fuller test.
    • Adjusting for non-stationarity: if the data is non-stationary, a log transformation is applied to stabilize the series.
    • Re-testing for stationarity after the transformation.
    • Significance testing: an Ordinary Least Squares (OLS) regression tests the significance of the relationship between the time series and a dependent variable.
    • Model development and evaluation: if a significant relationship is found, a baseline regression model is developed, refined, and evaluated on its R-squared value.
    • Output: the results of the stationarity tests, the significance tests, and the R-squared value of the regression model.
    import pandas as pd
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller
    from statsmodels.regression.linear_model import OLS
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score


    def test_stationarity(timeseries):
        # Augmented Dickey-Fuller test; return the p-value
        dftest = adfuller(timeseries, autolag='AIC')
        return dftest[1]


    def adjust_non_stationarity(data):
        # Adjust for non-stationarity (example: log transformation)
        return np.log(data)


    def significance_testing(X, y):
        # Significance testing via OLS regression
        X = sm.add_constant(X)  # add a constant term
        model = OLS(y, X).fit()
        return model.pvalues


    def main():
        # Load data
        file_path = input("Enter the path to your CSV file: ")
        df = pd.read_csv(file_path)

        # Assuming the time series column is named 'timeseries'
        timeseries = df['timeseries']

        # Step 1: Test for stationarity
        if test_stationarity(timeseries) > 0.05:
            # Step 2: Adjust data for non-stationarity
            timeseries = adjust_non_stationarity(timeseries)
            # Step 3: Re-test for stationarity
            if test_stationarity(timeseries) > 0.05:
                print("Data is still non-stationary after transformation. Ending process.")
                return
            else:
                print("Data is stationary after transformation. Proceeding with analysis.")
        else:
            print("Data is stationary. Proceeding with analysis.")

        # Step 4: Significance testing
        # Assuming another column 'dependent_var' as the dependent variable
        pvalues = significance_testing(df[['timeseries']], df['dependent_var'])
        if any(pval < 0.05 for pval in pvalues[1:]):  # ignore the constant's p-value
            print("Significant correlation found. Proceeding to model development.")
        else:
            print("No significant correlation found. Ending process.")
            return

        # Steps 5-7: Develop, refine, and evaluate the regression model
        # (simplified example using OLS regression)
        X_train, X_test, y_train, y_test = train_test_split(
            df[['timeseries']], df['dependent_var'], test_size=0.2, random_state=0)
        model = OLS(y_train, sm.add_constant(X_train)).fit()
        predictions = model.predict(sm.add_constant(X_test))
        print("Model R-squared:", r2_score(y_test, predictions))

        # Step 8: Interpret the regression line
        # (analytical; depends on the specific model and data)
        # Step 9: Comparative analysis


    if __name__ == "__main__":
        main()

  • Plain Vanilla Bond Price Calculator | Akweidata

    A web application that takes FV, coupon rate, YTM, and number of periods as arguments and prices a plain vanilla bond. GitHub: https://github.com/akweix/vanilla_bond_price_calculator
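The pricing behind such a calculator is the standard present-value formula: discount each coupon and the face value at the yield to maturity. A small sketch (function and argument names are illustrative; the linked repository may structure this differently):

```python
def bond_price(fv, coupon_rate, ytm, periods):
    """Price of a plain vanilla bond paying a fixed coupon each period.

    fv: face value; coupon_rate and ytm are per-period decimal rates.
    """
    coupon = fv * coupon_rate
    # Present value of the coupon stream plus the discounted face value
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, periods + 1))
    pv_face = fv / (1 + ytm) ** periods
    return pv_coupons + pv_face

# Par check: when the coupon rate equals the yield, the bond prices at face value
par_price = bond_price(1000, 0.05, 0.05, 10)
print(round(par_price, 2))  # → 1000.0
```

Raising the yield above the coupon rate pushes the price below par (a discount bond), and vice versa, which is a quick sanity check for any implementation.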

  • Cocoa Production: West Africa - 2022 | Akweidata

    Work in progress.

  • Manipulating File Paths: Backward to Forward Slashes | Akweidata

    A program that converts backslashes in file path names to forward slashes, targeted at Windows users copying paths into R or Python.
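A minimal sketch of the conversion (the actual program's interface is not shown in the description; this assumes a simple string-replace approach):

```python
def to_forward_slashes(path: str) -> str:
    """Convert Windows-style backslashes in a path to forward slashes."""
    return path.replace("\\", "/")

# A copied Windows path becomes usable in R or Python scripts
print(to_forward_slashes(r"C:\Users\me\data\file.csv"))  # → C:/Users/me/data/file.csv
```

For paths that will actually be opened from Python, `pathlib.PurePath(...).as_posix()` achieves the same result with proper path semantics.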

  • Data Visualization of the Dynamic Efficiency of Oil and Gas Production in Ghana | Akweidata

    A comprehensive tool for understanding the real-time efficiency of oil and gas production in Ghana. https://akweix.shinyapps.io/trial_app/

    Welcome to my R Shiny web app, "Data Visualization of the Dynamic Efficiency of Oil and Gas Production in Ghana." This web app leverages a range of data science techniques, including interactive visualizations, machine learning, sentiment analysis, natural language processing, data analytic tools, and web scraping, to provide real-time, comprehensive analysis of Ghana's oil and gas sector. The goal is to enhance information efficiency, market efficiency, and resource-management efficiency, making it a valuable tool for practitioners, academics, and policymakers alike.

    The application is primarily centred on Ghana, especially regarding the visualizations; however, the analytic tools developed can be applied to all markets and regions. Although the application presents insights and tools applicable to both oil and gas, greater emphasis was placed on oil production because of its larger share of Ghana's energy market and its more dynamic nature.

    Report: anum_sean_data_science_final_report.pdf (Download PDF, 2.66 MB)

  • frankenstein.io - Draft 1 | Akweidata

    Restructuring & simplifying "Frankenstein code"

    With the emergence of LLMs alongside traditional sources of community-shared code (Stack Overflow, GitHub, etc.), many coding projects amount to Frankenstein code: code assembled by picking up sections from various sources and combining them into one project, notoriously common among unskilled programmers or coders looking to maximize productivity. Frankenstein.io is an AI-powered web application designed for coders. It takes full code submissions and, using guided objectives, logically restructures the code to simplify it while ensuring the output remains exactly the same as the original. The application also includes a citation tool that tracks and cites the open-source contributions and other sources from which the code is derived.

    Logical Breakdown

    Code Simplification: users input full code submissions, which are analyzed and restructured using AI. The primary goal is that the simplified code produces the same results as the original. The platform also provides a detailed narrative explaining the changes made during simplification.

    Citation Tool: to enhance transparency and accountability, Frankenstein.io tracks the sources of code snippets used in the simplification process, automatically generates citations for open-source contributions, and displays them alongside the simplified code.

    Technology Stack

    Frontend: HTML, CSS, and JavaScript with a framework such as React.js or Vue.js; Next.js or Nuxt.js for a more robust structure.
    Backend: Node.js with Express.js, or Python with Flask/Django; TensorFlow or PyTorch models for code analysis and restructuring; MongoDB or PostgreSQL for database management.
    AI/ML: natural language processing for understanding code objectives; machine learning models to identify and apply code simplifications.
    Version Control & Deployment: GitHub or GitLab for version control; Docker containers orchestrated with Kubernetes; deployment on AWS, Google Cloud, or Azure.

    Development Plan

    Research & Planning: market research to understand the target audience's needs, defining the objectives and requirements for the AI algorithms, and planning the user interface and user experience.
    Design & Prototyping: wireframes and mockups for the web application, followed by UI/UX design for a seamless user experience.
    AI Algorithm Development: developing and training AI models to analyze and simplify code, and testing them on varied code samples to ensure accuracy.
    Backend Development: setting up the server environment and database, and developing API endpoints for code submission, analysis, and retrieval.
    Frontend Development: building the interface from the designs, with features for code submission, viewing simplified code, and displaying citations.
    Integration & Testing: integrating the AI models with the backend, thorough testing to ensure the application works as expected, and user testing to gather feedback.
    Deployment & Maintenance: deploying to production, continuous monitoring for issues, and regular maintenance and updates.

    User Flow

    Code Submission: users submit their code through a user-friendly interface and provide guided objectives for the AI to focus on.
    Objective Setting: users specify what the AI should focus on during code analysis.
    Code Analysis: the AI analyzes and restructures the code, ensuring the output remains the same.
    Simplified Code Display: users view the simplified code alongside a narrative explaining the changes.
    Citation Display: citations for code sources appear alongside the simplified code.

    Potential Challenges

    Handling the variety of coding languages and styles is a significant challenge, along with maintaining the accuracy and reliability of the AI's code simplification. Properly attributing sources and managing citations to avoid plagiarism is also crucial.
