
  • Ghana Stock Exchange: Real-Time Prices Web App V1

    A basic web application for finding real-time summaries of stocks on Ghana's Stock Exchange (GSE). The app is fully powered by GSE-API, the Ghana Stock Exchange API found at http://dev.kwayisi.org/.

    GitHub: https://github.com/akweix/GSE_price_finder

    [Figure: Listed companies and their tickers on the GSE]
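    As a minimal sketch of how such an app can query the API (the endpoint path and response shape here are assumptions based on the GSE-API documentation, not taken from this project's code):

    import requests

    # Assumed endpoint: live price summaries for all listed equities
    resp = requests.get('https://dev.kwayisi.org/apis/gse/live', timeout=10)
    resp.raise_for_status()
    for quote in resp.json():
        # Each record is assumed to carry the ticker and its latest price data
        print(quote)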

  • Is ignorance truly bliss?

    Investigating the link between the lack of general information and the conception of the economy in Ghana. Project from 2018.

    I am an avid lover of George Orwell's book 1984. A major concept at play within the book is that ignorance is bliss: people living in poverty (the proles), with limited access to information on the economy, somehow live happily. So I started to ponder how such a concept plays out on my continent. Is ignorance truly bliss? With poverty in Africa being so high, is the absence of information about the economy among the general members of society the cause of social stability in spite of economic turmoil (Asogwa, Eze, & Ezema, 2017)? When the economy is in shambles and the political sphere is in disarray, are we still under the misconception that all is well? Does happiness exist in poverty (Graham, 2011)?

    Research Topic: Is ignorance truly bliss? Investigating the link between the lack of general information and the conception of the economy in Ghana. Is the public's conception of the present economic condition in a country associated with how often they use the internet as a source of news?

    Data & Methods

    The sample is taken from the Afrobarometer. It comprises 2,400 respondents from Ghana answering questions pertaining to their perception of the economy and their use of the internet as a source of news. In relation to the research topic, the use of the internet as a source of news is what stands in for "general information." Other sources such as the radio (Fombad & Jiyane, 2016) or newspapers (Omolewa, 2008) are not suitable sources of information for this project; the nature of the internet makes it the best means of estimating one's general knowledge (Stilwell, 2018). The observations in this sample were gathered via surveys, yielding two categorical variables: "conception of the economy" and "frequency of the use of the internet as a source of news." According to the chi-squared test (see the sketch below), there is an association between these two variables.
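    A minimal sketch of that test with scipy; the counts below are hypothetical stand-ins, not the actual Afrobarometer figures:

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical contingency table: rows = frequency of internet use as a
    # news source, columns = conception of the economy (illustrative counts)
    table = pd.DataFrame(
        {'Negative': [520, 410, 300], 'Positive': [60, 90, 120]},
        index=['Never', 'Sometimes', 'Every day'])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4g}, dof = {dof}")
    # A small p-value indicates an association between the two variables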
    Results: Univariate Analysis

    From the graph above, we find that 85% of Ghanaians have a negative conception of the economy, whereas only 15% have a positive one. We also find that within the rural setting there is a greater positive conception of the economy than in the urban setting.

    Results: Bivariate Analysis

    Key Findings: The univariate analysis brings key proportions to light: most Ghanaians in the sample have a negative conception of the economy. The urban-rural comparison simply sheds light on the relative differences; that graph is a bonus feature. The bivariate analysis, however, shows an association between "perception of the economy" and "use of the internet as a source of information." The association is not extremely strong, but it is there. Moreover, as use of the internet increases, one is more likely to perceive the Ghanaian economy positively; as use of the internet decreases, one is more likely to perceive it negatively.

    Conclusion

    The overarching question of my research, is ignorance bliss, appears less likely to hold in Ghana, at least with the internet as the source of awareness. I have found that, in my sample, the lower one's usage of the internet as a source of news, the more likely one is to rate the economy negatively. Thus ignorance is not bliss; awareness is. Why is this the case, contrary to other research and to 1984? Two large lurking variables are the availability of information and technology in low-income households, and one's competency with the internet, which depends on education level (Ajakaiye & Kimenyi, 2011). With a low income and an inability to use the internet, most likely due to a low education level, such a person is more likely to be living in undesirable conditions and therefore to hold a negative conception of the economy. However, this claim arising from the lurking variables can be countered with the simple observation that the paradox of poverty and happiness/bliss has been documented (Graham, 2011).

    So why are the findings as they are? Why is it that the higher one's usage of the internet as a source of news, the more likely one is to have a positive conception of the Ghanaian economy? Because the Ghanaian economy is performing relatively well. Though there is much poverty and low infrastructure capacity, there is still a significant amount of development. With strong economic growth and rising standards of living, the state of the Ghanaian economy is not as bad as many would claim, especially in relation to other low-income countries. Hence, the heavier one's internet use, the more likely one is to know this, to understand that all is not as bad as it seems, and so to have a more positive conception of the economy. Thus the answer to the overarching question, is ignorance bliss? Yes, but only when things are going badly. In the case of Ghana's economy, things are actually going well, so ignorance is not bliss in this case. So, fellow Ghanaians, put the pitchforks down, for no revolts are required here: the Ghanaian economy is doing well, and those with access to the best source of information, the internet, are most likely to know it and to hold a more positive conception of the economy. This research adds another aspect to the age-old question of whether ignorance is bliss, by showing that it is most likely to be the case if and only if the surroundings are in a tumultuous state.

    References

    Ajakaiye, Olu, and Mwangi S. Kimenyi. "Higher Education and Economic Development in Africa: Introduction and Overview." Journal of African Economies 20, no. suppl_3 (August 1, 2011): iii3-13. https://doi.org/10.1093/jae/ejr027.

    Asogwa, Brendan Eze, and Ifeanyi Jonas Ezema. "Freedom of Access to Government Information in Africa: Trends, Status and Challenges." Records Management Journal 27, no. 3 (August 18, 2017): 318-38. https://doi.org/10.1108/RMJ-08-2015-0029.

    Fombad, Madeleine C., and Glenrose Veli Jiyane. "The Role of Community Radios in Information Dissemination to Rural Women in South Africa." Journal of Librarianship and Information Science, September 22, 2016. https://doi.org/10.1177/0961000616668960.

    Graham, Carol. "Adaptation amidst Prosperity and Adversity: Insights from Happiness Studies from around the World." The World Bank Research Observer 26, no. 1 (February 1, 2011): 105-37. https://doi.org/10.1093/wbro/lkq004.
    Omolewa, Michael. "Adult Literacy in Africa: The Push and Pull Factors." International Review of Education 54, no. 5/6 (2008): 697-711.

    Stilwell, Christine. "Information as Currency, Democracy, and Public Libraries." Library Management 39, no. 5 (April 18, 2018): 295-306. https://doi.org/10.1108/LM-08-2017-0078.

  • Dynamic view of Ghana's Insurance Industry

    Work in progress.

  • Sustainability Dimensions of Stocks on the SIX: Render 2

    Quantitatively assessing Brundtland's (1987) sustainability dimensions: the case of the SIX.

  • Scraping Oil-related articles

    Run in Python via Google Colab.

    # Install and set up necessary packages and dependencies
    !pip install selenium
    !apt-get update
    !apt install chromium-chromedriver

    import sys
    sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from bs4 import BeautifulSoup
    import pandas as pd

    # Set up Chrome options for Selenium
    chrome_options = Options()
    chrome_options.add_argument('--headless')
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--disable-dev-shm-usage')

    # Initialize the Chrome WebDriver with the specified options
    driver = webdriver.Chrome(options=chrome_options)

    # Fetch the web page
    url = 'https://news.google.com/search?q=oil%20prices'
    driver.get(url)

    # Get the page source and close the browser
    html = driver.page_source
    driver.quit()

    # Parse the web page using BeautifulSoup
    soup = BeautifulSoup(html, 'html.parser')
    articles = soup.find_all('article')

    # Extract the necessary information
    news_data = []
    base_url = 'https://news.google.com'
    for article in articles:
        # Extract the title and link
        title_link_element = article.find('a', class_='JtKRv', href=True)
        title = title_link_element.text.strip() if title_link_element else "No Title"
        link = base_url + title_link_element['href'][1:] if title_link_element else "No Link"
        # Extract the date
        time_element = article.find('time')
        date = (time_element['datetime'] if time_element and 'datetime' in time_element.attrs
                else time_element.text.strip() if time_element else "No Date")
        news_data.append([title, link, date])

    # Store the data in a DataFrame and save it as CSV
    df = pd.DataFrame(news_data, columns=['Title', 'Link', 'Date'])
    csv_file = 'google_news_oil_prices.csv'
    df.to_csv(csv_file, index=False)

    # Download the file to your computer (only works in Google Colab)
    try:
        from google.colab import files
        files.download(csv_file)
    except ImportError:
        print("The files module is not available. This code is not running in Google Colab.")

    Future Projects:
    1. Relation of frequency of oil-related posts and sustainability risks
    2. Relation of frequency of oil-related posts and stock prices (general & oil-producing/intensive firms)

    Updated Code

    # Install and set up necessary packages and dependencies
    !pip install selenium
    !apt-get update
    !apt install chromium-chromedriver

    import sys
    sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys
    from bs4 import BeautifulSoup
    import pandas as pd
    import time
    from datetime import datetime, timedelta
    import re

    # Function to convert various date formats to a standardized format
    def convert_relative_date(text):
        current_datetime = datetime.now()
        current_year = current_datetime.year
        # Check 'yesterday' first: 'Yesterday' also contains the substring 'day',
        # so checking 'day' first would misroute it to the "N days ago" branch
        if 'yesterday' in text.lower():
            return (current_datetime - timedelta(days=1)).strftime('%Y-%m-%d')
        elif 'hour' in text or 'hours' in text:
            return current_datetime.strftime('%Y-%m-%d')
        elif 'minute' in text or 'minutes' in text:
            return current_datetime.strftime('%Y-%m-%d')
        elif 'day' in text or 'days' in text:
            match = re.search(r'\d+', text)
            days_ago = int(match.group()) if match else 0
            return (current_datetime - timedelta(days=days_ago)).strftime('%Y-%m-%d')
        else:
            try:
                parsed_date = datetime.strptime(text, '%b %d')
                return datetime(current_year, parsed_date.month, parsed_date.day).strftime('%Y-%m-%d')
            except ValueError:
                return text  # Return the original text if parsing fails

    # Set up Chrome options for Selenium
    chrome_options = Options()
    chrome_options.add_argument('--headless')
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--disable-dev-shm-usage')

    # Initialize the Chrome WebDriver with the specified options
    driver = webdriver.Chrome(options=chrome_options)

    # Fetch the web page
    url = 'https://news.google.com/search?q=oil%20prices'
    driver.get(url)

    # Scroll the page to load more articles
    for _ in range(5):  # Adjust the range for more or fewer scrolls
        driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.END)
        time.sleep(2)  # Wait for the page to load

    # Get the page source and close the browser
    html = driver.page_source
    driver.quit()

    # Parse the web page using BeautifulSoup
    soup = BeautifulSoup(html, 'html.parser')
    articles = soup.find_all('article')

    # Extract the necessary information
    news_data = []
    base_url = 'https://news.google.com'
    for article in articles:
        title_link_element = article.find('a', class_='JtKRv', href=True)
        title = title_link_element.text.strip() if title_link_element else "No Title"
        link = base_url + title_link_element['href'][1:] if title_link_element else "No Link"
        time_element = article.find('time')
        date = time_element.text.strip() if time_element else "No Date"
        news_data.append([title, link, date])

    # Store the data in a DataFrame
    df = pd.DataFrame(news_data, columns=['Title', 'Link', 'Date'])

    # Convert dates to a standardized format
    for i, row in df.iterrows():
        df.at[i, 'Date'] = convert_relative_date(row['Date'])

    # Save the DataFrame to CSV
    csv_file = 'google_news_oil_prices.csv'
    df.to_csv(csv_file, index=False)

    # Download the file to your computer (only works in Google Colab)
    try:
        from google.colab import files
        files.download(csv_file)
    except ImportError:
        print("The files module is not available. This code is not running in Google Colab.")
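    A quick check of the date-normalization helper (the outputs shown assume the code runs on 2024-05-10; actual results depend on the current date):

    print(convert_relative_date('2 days ago'))  # -> '2024-05-08'
    print(convert_relative_date('Yesterday'))   # -> '2024-05-09' (thanks to the ordering noted above)
    print(convert_relative_date('Feb 18'))      # -> '2024-02-18'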

  • Converting Excel to CSV: Web Application

    A basic web application written in HTML and JavaScript to convert Excel files to CSV.

    GitHub: https://github.com/akweix/excel_to_csv
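    The app itself does the conversion client-side in JavaScript; as an illustrative sketch of the same conversion in Python (not this app's implementation), pandas handles it in a few lines:

    import pandas as pd

    # Read a workbook's first sheet and write it out as CSV
    # (reading .xlsx files requires the openpyxl package)
    df = pd.read_excel('input.xlsx')
    df.to_csv('output.csv', index=False)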

  • Cocoa Production: Ghana and Ivory Coast - 2022

    Summary of cocoa production in Ghana and Ivory Coast in 2022.

  • SustainabilityV4

    Profit is the only Green: Visualization of Swiss Stocks & SRI Portfolios

    By Sean Akwei Anum

    Abstract

    Socially Responsible Investing (SRI), which is increasingly popular, emphasizes social and environmental factors in investment decisions to promote sustainability. In theory, SRI outperforms, especially in the long run. While practitioners typically remain skeptical, this return-based sustainability assessment of Swiss stocks demonstrates SRI's outperformance and its ability to promote sustainability.

    Research Question

    Is SRI a significant means of promoting sustainability? Sustainability, per the 1987 United Nations Brundtland Commission, means meeting present needs without compromising future generations, and involves social, economic, and environmental aspects. In investments, it translates to SRI, which blends social and environmental factors into investment decisions. This project seeks to quantitatively depict these sustainability dimensions for stocks and SRI portfolios on the SIX (Swiss Stock Exchange).

    Methodology

    Data Proxies. The data required covers the three sustainability parameters for each stock on the Swiss exchange, one row per company:

    Company Name | Environmental Score | Social Score | Economic Score

    The quantitative proxies are as follows:
    1. Environmental: an ESG rating with a numeric individual (pillar) score for a firm's environmental impact;
    2. Social: an ESG rating with a numeric individual score for a firm's social impact;
    3. Economic: a risk-adjusted measure of the firm's expected return, via the Capital Asset Pricing Model (CAPM).

    Visualizing Three Parameters. A 3D plot was chosen to visualize the three quantitative parameters, effectively showing their relationship and intersections. Stocks with high environmental, social, and economic scores are classified as "Sustainable," while those with low scores are deemed "At Risk." Stocks with scores between these extremes are categorized as "Acceptable." The final visualization depicts the relative distribution of individual stocks and SRI portfolios across the three parameters. Hence, a standardized score was used for each parameter, the logic being to draw all parameters down to a roughly equal scale and so ensure an informative visual effect.

    Constructing SRI Portfolios. Using the collected individual stock data, four SRI funds were created:
    1. Negative Screening: excludes investments in companies or sectors that do not meet specific ethical, environmental, or social criteria.
    2. Best in Class: selects companies that outperform their peers on environmental, social, and governance (ESG) criteria within each sector.
    3. Thematic Approach: focuses on specific sustainability themes or sectors, such as renewable energy or social justice.
    4. ESG Integration: incorporates ESG factors into traditional financial analysis to identify risks and opportunities not captured by conventional methods.

    Data Sources

    Data was collected for each of the three parameters via the Thomson Reuters financial market portal Refinitiv Eikon.

    Environmental Pillar Score (ESG rating): measures a company's impact on living and non-living natural systems, including the air, land and water, as well as complete ecosystems.

    Social Pillar Score (ESG rating): measures a company's capacity to generate trust and loyalty with its workforce, customers and society through its use of best management practices.

    Economic Pillar Score (Beta): beta measures how much the stock moves for a given move in the market. The economic score was then computed with the Capital Asset Pricing Model, CAPM = risk-free rate + beta x (risk premium), where the risk-free rate and risk premium in Switzerland are 1.135% (source: World Government Bonds) and 5.5% (source: NYU), respectively. A sketch of this computation follows below.
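    A minimal sketch of that computation under the stated assumptions; the betas below are illustrative placeholders, not Refinitiv data:

    # Economic score via CAPM: E(R) = risk-free rate + beta * risk premium
    RISK_FREE = 0.01135   # Swiss risk-free rate (World Government Bonds)
    RISK_PREMIUM = 0.055  # Swiss equity risk premium (NYU)

    def economic_score(beta):
        return RISK_FREE + beta * RISK_PREMIUM

    # A beta of 1 recovers the average market return of 6.635%,
    # i.e. the ~6.64% threshold used in the Sustainable criterion below
    for name, beta in [('Market', 1.0), ('Hypothetical Stock A', 1.3), ('Hypothetical Stock B', 0.7)]:
        print(f"{name}: {economic_score(beta):.3%}")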
    Data was collected based on completeness. As such, although the SIX lists 250 stocks, the project at hand uses 187. One stock, IGEA Pharma NV, was excluded as an extreme negative outlier that badly distorted the scale of the entire visualization.

    Constructing the "Sustainable" and "At Risk" Criteria

    The Sustainable criterion was defined as: Environmental score >= 70 (out of 100); Social score >= 70 (out of 100); and Economic score >= 6.64% (the average market return). The corresponding standardized scores are 1.05, 0.83 and 0, respectively. The At Risk criterion was defined as: Environmental score <= 30 (out of 100); Social score <= 30; and Economic score <= 3.34% (one standard deviation below the average market return). The corresponding standardized scores are -0.30, -0.68 and -1, respectively. The conditions are based on core financial theories.

    Data for SRI Portfolios

    For the Negative Screening and Best in Class approaches, the collected ESG data was enough to construct the portfolios directly. For the Thematic Approach and ESG Integration, I replicated existing funds employing these strategies: the "Ethos Swiss Governance Index Large" and the "ETHOS II - Ethos Swiss Sustainable Equities -A", respectively.

    Final Visualization

    The graph is interactive. Average-sized points represent individual stocks on the Swiss exchange, the bigger orange points represent SRI portfolios, and the big black point represents the market average.

    Results and Conclusion

    Market performance: the sustainability cuboid includes 11% of stocks and three-quarters of the SRI strategies, whereas the at-risk quadrant contains 6% of stocks. The general market performance is deemed acceptable, with many stocks nearing the sustainability cuboid. Despite the substantial progress still needed, these findings indicate a promising trend towards sustainability in the Swiss stock market.

    SRI performance: to answer the research question, SRI funds do appear to promote sustainability. This is supported by the visualization showing three out of four strategies as Sustainable. Contrary to expectations, ESG Integration is the only strategy classified as non-sustainable. In theory, ESG Integration should be the best strategy, whereas the other three are seen as simplistic and lacking a nuanced ESG assessment relative to market returns. This paradoxical result likely arises because, unlike the simpler strategies, ESG Integration involves more subjective and active management, leading to significant performance variation among managers. Testing this hypothesis with another fund using ESG Integration yielded a Sustainable result, highlighting the classic debate between active and passive management, now within SRI.

    Concluding Remarks

    Ironically, firms with controversial reputations like Nestle, UBS, and Credit Suisse have good non-economic scores, while cantonal banks unexpectedly show low scores. This raises questions about how these public entities might be causing more social and environmental harm, and calls for a deeper examination of the legitimacy of ESG scores.

  • Google News Scraper

    Scrape Google News articles for a particular keyword and date range.

    Use the google_news_scraper function by providing the keyword and date range as inputs. For example, google_news_scraper("oil prices", "2023-08-25", "2023-08-31") will fetch articles containing the keyword "oil prices" published between August 25 and 31, 2023, and save them as a CSV file.

    # Install necessary packages
    !pip install selenium
    !apt-get update
    !apt install chromium-chromedriver

    import sys
    import pandas as pd
    from datetime import datetime, timedelta
    import re
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys
    from bs4 import BeautifulSoup
    import time

    def convert_relative_date(text, current_datetime):
        current_year = current_datetime.year
        # Check 'yesterday' first: 'Yesterday' also contains the substring 'day',
        # so checking 'day' first would misroute it to the "N days ago" branch
        if 'yesterday' in text.lower():
            return (current_datetime - timedelta(days=1)).strftime('%Y-%m-%d')
        elif 'hour' in text or 'hours' in text:
            return current_datetime.strftime('%Y-%m-%d')
        elif 'minute' in text or 'minutes' in text:
            return current_datetime.strftime('%Y-%m-%d')
        elif 'day' in text or 'days' in text:
            match = re.search(r'\d+', text)
            days_ago = int(match.group()) if match else 0
            return (current_datetime - timedelta(days=days_ago)).strftime('%Y-%m-%d')
        else:
            try:
                parsed_date = datetime.strptime(text, '%b %d')
                return datetime(current_year, parsed_date.month, parsed_date.day).strftime('%Y-%m-%d')
            except ValueError:
                return text  # Return the original text if parsing fails

    def google_news_scraper(keyword, start_date, end_date):
        # Convert start_date and end_date to datetime objects
        start_date = datetime.strptime(start_date, '%Y-%m-%d')
        end_date = datetime.strptime(end_date, '%Y-%m-%d')

        # Set up Chrome options for Selenium
        chrome_options = Options()
        chrome_options.add_argument('--headless')
        chrome_options.add_argument('--no-sandbox')
        chrome_options.add_argument('--disable-dev-shm-usage')
        sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')

        # Initialize the Chrome WebDriver with the specified options
        driver = webdriver.Chrome(options=chrome_options)

        # Fetch the web page
        query = '+'.join(keyword.split())
        url = f'https://news.google.com/search?q={query}'
        driver.get(url)

        # Scroll the page to load more articles
        for _ in range(5):  # Adjust the range for more or fewer scrolls
            driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.END)
            time.sleep(2)  # Wait for the page to load

        # Get the page source and close the browser
        html = driver.page_source
        driver.quit()

        # Parse the web page using BeautifulSoup
        soup = BeautifulSoup(html, 'html.parser')
        articles = soup.find_all('article')

        # Extract the necessary information
        news_data = []
        base_url = 'https://news.google.com'
        for article in articles:
            title_link_element = article.find('a', class_='JtKRv', href=True)
            title = title_link_element.text.strip() if title_link_element else "No Title"
            link = base_url + title_link_element['href'][1:] if title_link_element else "No Link"
            time_element = article.find('time')
            date = time_element.text.strip() if time_element else "No Date"
            news_data.append([title, link, date])

        # Store the data in a DataFrame
        df = pd.DataFrame(news_data, columns=['Title', 'Link', 'Date'])

        # Convert dates to a standardized format
        current_datetime = datetime.now()
        for i, row in df.iterrows():
            if row['Date']:
                df.at[i, 'Date'] = convert_relative_date(row['Date'], current_datetime)

        # Filter the DataFrame by the provided date range
        def is_valid_date(date_str):
            try:
                return start_date <= datetime.strptime(date_str, '%Y-%m-%d') <= end_date
            except (TypeError, ValueError):
                return False

        filtered_df = df[df['Date'].apply(is_valid_date)]

        # Save the filtered DataFrame to CSV
        csv_file = f'google_news_filtered_{query}.csv'
        filtered_df.to_csv(csv_file, index=False)
        print(f"Filtered articles saved to {csv_file}")

        # Check if running in an environment that supports file download
        try:
            from google.colab import files
            files.download(csv_file)
        except ImportError:
            print(f"Download not supported in this environment. Please manually retrieve the file: {csv_file}")

    # Prompt user for input
    keyword = input("Enter the search keyword: ")
    start_date = input("Enter the start date (YYYY-MM-DD): ")
    end_date = input("Enter the end date (YYYY-MM-DD): ")

    # Call the function with user input
    google_news_scraper(keyword, start_date, end_date)

    Project GitHub repository: https://github.com/seanxjohn/google_news_scrapper/tree/main
