Commit 4f260181 authored by Christian Boulanger

uploading project

parent a8a26863
# Code experiments
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://gitlab.gwdg.de/boulanger/experiments.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://gitlab.gwdg.de/boulanger/experiments/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thanks to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context, such as a particular programming language version or operating system, or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to inline the smallest usage example you can demonstrate, and to link to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
This repo contains various small projects and code experiments.
KB_HOST='biblio-p-db03.fiz-karlsruhe.de'
KB_DB='kbprod'
KB_PORT=6432
KB_USER='username'
KB_PASS='password'
CROSSREF_EMAIL='youremail@yourdomain.org'
OPENAI_API_KEY=''
../.idea
!.gitignore
!.env.dist
zdb-*.json
import os

import requests
from dotenv import load_dotenv

load_dotenv()


def get_dois_years(issn):
    """Return the years for which Crossref has DOIs for the given ISSN as a
    comma-separated list of ranges (e.g. "1990-1995, 2001-2010"), an empty
    string if no breakdown is available, or None if the journal is unknown."""
    url = f"https://api.crossref.org/journals/{issn}"
    email = os.getenv('CROSSREF_EMAIL')
    headers = {'User-Agent': f'Python-requests/2.23.0 (mailto:{email})'}
    response = requests.get(url, headers=headers)
    # Check if the request was successful: an unknown ISSN returns 404,
    # any other error raises an HTTPError
    if response.status_code == 404:
        return None
    response.raise_for_status()
    data = response.json()
    # Extract the array of [year, number of DOIs]
    years_info = data.get("message", {}).get("breakdowns", {}).get("dois-by-issued-year", [])
    if not years_info:
        return ""
    # Sort the years_info by year
    years_info.sort(key=lambda x: x[0])
    # Collapse consecutive years into "start-end" sequences
    years_list = []
    start_year = None
    last_year = None
    for year, _ in years_info:
        if start_year is None:
            # This is the first year in a sequence
            start_year = year
            last_year = year
        elif year == last_year + 1:
            # This year is a continuation of the sequence
            last_year = year
        else:
            # This year is not a continuation; close the previous sequence and start a new one
            years_list.append(f"{start_year}-{last_year}")
            start_year = year
            last_year = year
    # Add the last sequence to the list
    years_list.append(f"{start_year}-{last_year}")
    return ", ".join(years_list)
# written with help from ChatGPT-4 and using code from the googlesearch python module
# (see https://github.com/Nv7-GitHub/googlesearch)
import random
import re
from io import BytesIO
from time import sleep

import requests
from bs4 import BeautifulSoup
from pypdf import PdfReader
from requests.exceptions import HTTPError, ConnectionError

_useragent_list = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.62',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0'
]


def get_useragent():
    """Return a randomly chosen User-Agent string to vary the request fingerprint."""
    return random.choice(_useragent_list)


def _req(term, results, lang, start, proxies, timeout, retry=3, delay_before_retry=5, filetype: str = "html"):
    while retry > 0:
        try:
            resp = requests.get(
                url="https://www.google.com/search",
                headers={
                    "User-Agent": get_useragent()
                },
                params={
                    "q": term,
                    "num": results + 2,  # Prevents multiple requests
                    "hl": lang,
                    "start": start,
                    "filetype": filetype
                },
                proxies=proxies,
                timeout=timeout
            )
            resp.raise_for_status()
            return resp
        except ConnectionError as err:
            print(f'Connection error: {err}. Retrying in {delay_before_retry} seconds.')
            retry -= 1
            sleep(delay_before_retry)
    raise ConnectionError("Too many retries.")


def _search(term, num_results=10, lang="en", proxies=None, sleep_interval=0, timeout=5, retry=3, delay_before_retry=5):
    """The Google search implementation."""
    escaped_term = term.replace(" ", "+")
    # Fetch
    start = 0
    while start < num_results:
        # Send request
        resp = _req(escaped_term, num_results - start, lang, start, proxies, timeout, retry=retry,
                    delay_before_retry=delay_before_retry)
        # Parse
        soup = BeautifulSoup(resp.text, "html.parser")
        result_block = soup.find_all("div", attrs={"class": "g"})
        found_valid_result = False
        for result in result_block:
            # Find link, title, description
            link = result.find("a", href=True)
            title = result.find("h3")
            description_box = result.find("div", {"style": "-webkit-line-clamp:2"})
            if description_box:
                description = description_box.text
                if link and title and description:
                    start += 1
                    found_valid_result = True
                    yield link["href"]
                    break
        if not found_valid_result:
            # No (further) valid results on this page; stop paging
            break
        sleep(sleep_interval)


def run_google_search(query, num_results=3, lang="en", sleep_interval=4, exclude: list = None, timeout=10,
                      proxy=None, verbose: bool = False) -> list:
    """
    Fetches and returns search results for a given query.

    This function sends a request to a search engine using the specified parameters and retrieves the text content of
    the search results. It avoids URLs listed in the 'exclude' parameter.

    Parameters:
    - query (str): The Google search query.
    - num_results (int, optional): The number of search results to retrieve. Defaults to 3.
    - lang (str, optional): The language for the search results. Defaults to "en" (English).
    - sleep_interval (int, optional): The delay in seconds between requests of additional result pages
      to avoid hitting rate limits. Defaults to 4 seconds.
    - exclude (list, optional): A list of regular expressions representing URLs to exclude from the search results.
    - timeout (int, optional): The timeout for the HTTP request.
    - proxy (str, optional): An HTTP(S) proxy URL to route the requests through.
    - verbose (bool, optional): If True, print progress information.

    Returns:
    - A list of URLs of length at most `num_results`.
    """
    if verbose:
        print(f'Searching google for {query}...')
    proxies = None
    if proxy:
        if proxy.startswith("https"):
            proxies = {"https": proxy}
        else:
            proxies = {"http": proxy}
    urls = _search(query, num_results=num_results, lang=lang, sleep_interval=sleep_interval, timeout=timeout,
                   proxies=proxies)
    if exclude is None:
        exclude = []
    return [url for url in urls if not any(re.search(e, url) for e in exclude)]


def clean_text(text):
    # Remove extra whitespace and empty lines
    cleaned_text = "\n".join(line.strip() for line in text.splitlines() if line.strip())
    return cleaned_text


def download(url: str, max_char: int = 8000, timeout: int = 10, retry: int = 3, delay_before_retry: int = 5,
             verbose: bool = False) -> str | None:
    """Download a URL and return up to `max_char` characters of its text content
    (first page for PDFs, stripped text for HTML), or None if it cannot be parsed."""
    if verbose:
        print(f'Downloading content of {url}...')
    _retry = retry
    while _retry > 0:
        try:
            response = requests.get(
                url=url,
                headers={
                    "User-Agent": get_useragent()
                },
                timeout=timeout)
            response.raise_for_status()
            content_type = response.headers.get('Content-Type', '').split(';')[0]  # Extract the primary content type
            match content_type:
                case 'application/pdf':
                    reader = PdfReader(BytesIO(response.content))
                    first_page_content = reader.pages[0].extract_text()
                    return first_page_content[:max_char]
                case 'text/html':
                    soup = BeautifulSoup(response.text, "html.parser")
                    full_text = soup.get_text()
                    full_text = clean_text(full_text)
                    return full_text[:max_char]
                case _:
                    print(f"Cannot parse {content_type}")
                    return None
        except ConnectionError as err:
            print(f'Connection error: {err}. Retrying in {delay_before_retry} seconds.')
            _retry -= 1
            sleep(delay_before_retry)
        except HTTPError as err:
            if err.response.status_code == 403:
                print("Got 403 (Forbidden) error. Skipping...")
                return None
            raise RuntimeError(f"Error: {err.response.status_code} {err.response.reason}:\n{err.response.text}")
    return None
# written by ChatGPT4
import os

import pandas as pd
import psycopg2
from dotenv import load_dotenv

load_dotenv()


def run_query(query, parameters=None):
    # Check if query is a path to a SQL file
    if os.path.isfile(query) and query.endswith('.sql'):
        with open(query, 'r') as file:
            query = file.read()
    # Retrieve database connection info from the environment variables
    host = os.environ.get('KB_HOST')
    dbname = os.environ.get('KB_DB')
    port = os.environ.get('KB_PORT')
    user = os.environ.get('KB_USER')
    password = os.environ.get('KB_PASS')
    # Connect to the database
    conn = psycopg2.connect(
        host=host,
        dbname=dbname,
        port=port,
        user=user,
        password=password
    )
    # Create a cursor object
    cursor = conn.cursor()
    try:
        # Execute the query
        cursor.execute(query, parameters)
        # If it's a SELECT query, fetch the results and display them
        if query.strip().lower().startswith('select'):
            # Fetch the results
            result = cursor.fetchall()
            # Get column names
            column_names = [desc[0] for desc in cursor.description]
            # Create a pandas DataFrame for better display in Jupyter Notebook
            display_result = pd.DataFrame(result, columns=column_names)
        else:
            # Commit the changes if it's not a SELECT query
            conn.commit()
            display_result = None
    finally:
        # Close the cursor and connection even if the query fails
        cursor.close()
        conn.close()
    return display_result
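

# Example usage (illustrative sketch; "mytable" is a hypothetical table name,
# and the KB_* connection settings from the .env file must be set):
if __name__ == "__main__":
    df = run_query("SELECT * FROM mytable LIMIT 10")
    print(df)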
# written with the help of ChatGPT4
import json
import os
import time

import requests
from requests import Response


def _query_lobid_api(url, params: dict = None, headers: dict = None):
    for attempt in range(4):  # Try up to four times (initial + 3 retries)
        try:
            response: Response = requests.get(url, params=params, headers=headers, timeout=10)
            response.raise_for_status()  # This will raise an HTTPError if the HTTP request returned an unsuccessful status code
            result = response.json()
            # Cache the raw result for debugging; make sure the tmp directory exists
            os.makedirs('tmp', exist_ok=True)
            with open('tmp/lobid-result.json', "w", encoding="utf-8") as f:
                json.dump(result, f, indent=2)
            return result
        except requests.exceptions.HTTPError as http_err:
            raise Exception(f"HTTP error occurred: {http_err}") from None
        except requests.exceptions.Timeout:
            if attempt == 3:  # Give up after 3 retries
                raise Exception("Maximum retry attempts reached after timeout") from None
            time.sleep(2)  # Wait for 2 seconds before retrying
        except requests.exceptions.RequestException as err:
            raise Exception(f"Error occurred during request: {err}") from None
        except ValueError as json_err:
            raise Exception(f"JSON decoding error: {json_err}\nResponse text: {response.text}") from None
    return None  # In case the loop completes without returning


def _dig(dictionary, path, *additional_keys):
    """Follow a dot-separated key path into nested dicts/lists, returning None if any key is missing."""
    # Split the path if it's a string, otherwise use it as-is
    keys = path.split('.') if isinstance(path, str) else [path]
    # Add any additional keys from *additional_keys
    keys.extend(additional_keys)
    current_level = dictionary
    for key in keys:
        if isinstance(current_level, dict) and key in current_level:
            current_level = current_level[key]
        elif isinstance(current_level, list) and key.isdecimal():
            current_level = current_level[int(key)]
        else:
            return None
    return current_level


def run_query(query: str, fields: list = None):
    # clean up query
    query = query.strip().replace("\n", " ")
    # Endpoint for the lobid.org API
    url = 'http://lobid.org/resources/search'
    # prepare and run query
    params = {
        'q': query,
        'format': "json"
    }
    headers = {
        'Accept': 'application/x-jsonlines'
    }
    result = _query_lobid_api(url, params, headers)
    member = result['member']
    return [({k: _dig(r, k) for k in fields} if fields else r) for r in member]


def get_resource(resource_id: str, fields: list = None):
    if resource_id.startswith("http"):
        resource_id = resource_id.replace('#', '').replace('!', '')
        url = f'{resource_id}.json'
    else:
        url = f'http://lobid.org/resources/{resource_id}.json'
    # run query and return (filtered) result
    result: dict = _query_lobid_api(url)
    return ({k: _dig(result, k) for k in fields}) if fields else result
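

# Example usage (illustrative sketch; the query syntax follows lobid's search API,
# and the dot paths, including numeric list indices like "issn.0", are resolved by _dig):
if __name__ == "__main__":
    hits = run_query('_exists_:issn AND language.label:deutsch', fields=['id', 'title', 'issn.0'])
    print(hits[:3])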
#%%
# written with help from ChatGPT-4
import json
import os
import re
import time

import requests
from dotenv import load_dotenv

load_dotenv()


def query_openai_api(model, instruction, user_input, max_tokens=1000,
                     temperature=1, top_p=1, frequency_penalty=0,
                     presence_penalty=0, timeout=60, max_retries=3, retry_delay=5, verbose=False):
    """
    Send a request to the OpenAI API with automatic message composition.

    Args:
        model (str): The model to use for the request (e.g., 'gpt-3.5-turbo').
        instruction (str): The instruction or context for the system message.
        user_input (str): The user's input message.
        max_tokens (int, optional): The maximum number of tokens to generate. Defaults to 1000.
        temperature (float, optional): The sampling temperature to use. Defaults to 1.
        top_p (float, optional): The nucleus sampling (top-p) parameter. Defaults to 1.
        frequency_penalty (float, optional): The frequency penalty parameter. Defaults to 0.
        presence_penalty (float, optional): The presence penalty parameter. Defaults to 0.
        timeout (int, optional): The maximum number of seconds to wait for an answer from the API. Defaults to 60.
        max_retries (int, optional): The maximum number of retry attempts. Defaults to 3.
        retry_delay (int, optional): The delay in seconds before retrying a request. Defaults to 5.
        verbose (bool, optional): If True, print progress information. Defaults to False.

    Returns:
        str: The response from the model.

    Raises:
        ValueError: If no API key has been set in the environment variables or any of the
            parameters are of the wrong type.
        RuntimeError: If the request fails after the given number of retries or another
            API-related error occurs.
    """
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise ValueError("API key not found. Please check your .env file.")
    if not model.lower().startswith("gpt"):
        raise ValueError("The provided model is not compatible with the OpenAI chat API.")
    if type(instruction) is not str or type(user_input) is not str:
        raise ValueError("Instruction and user input must be a string")
    if model == "gpt-4":
        url = "https://api.openai.com/v1/chat/completions"
        messages = [
            {"role": "system", "content": instruction},
            {"role": "user", "content": user_input}
        ]
        data = {
            "model": model,
            "messages": messages,
            "max_tokens": max_tokens,
            "temperature": temperature,
            "top_p": top_p,
            "frequency_penalty": frequency_penalty,
            "presence_penalty": presence_penalty
        }
    elif model.startswith("gpt-3.5-"):
        url = "https://api.openai.com/v1/engines/" + model + "/completions"  # Modify this URL if needed
        prompt = f"{instruction}\n\n{user_input}"
        data = {
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": temperature,
            "top_p": top_p,
            "frequency_penalty": frequency_penalty,
            "presence_penalty": presence_penalty
        }
    else:
        raise ValueError("Unsupported model. Please use 'gpt-4' or a model starting with 'gpt-3.5-'.")
    json_data = json.dumps(data)
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }
    for attempt in range(1, max_retries + 1):
        try:
            response = requests.post(url, data=json_data, headers=headers, timeout=timeout)
            if response.status_code == 200:
                content = response.json()
                if model == "gpt-4":
                    message = content['choices'][0]['message']
                    return message['content']
                elif model.startswith("gpt-3.5-"):
                    return content['choices'][0]['text']
            else:
                error_info = response.json().get('error', {})
                if response.status_code == 429 and error_info.get('code') == 'rate_limit_exceeded':
                    retry_after_match = re.search(r"Please try again in (\d+(\.\d+)?)s", error_info['message'])
                    if retry_after_match:
                        retry_after = float(retry_after_match.group(1))
                        if verbose:
                            print(f"Rate limit exceeded. Retrying after {retry_after} seconds.")
                        time.sleep(retry_after)
                        continue  # Continue the loop to retry the request
                # Check if token budget is exhausted
                if response.status_code == 429 and error_info.get('code') == "insufficient_quota":
                    raise RuntimeError("OpenAI token budget has been exhausted.")
                # Other error
                raise RuntimeError(f"Error response from OpenAI: {response.text}")
        except requests.exceptions.RequestException:
            if verbose:
                print(f"Request timeout or other error. Attempt {attempt} of {max_retries}")
            if attempt < max_retries:
                time.sleep(retry_delay)
    raise RuntimeError("Too many retries.")
# written by ChatGPT4
import time

import requests


def run_query(query):
    url = 'https://query.wikidata.org/sparql'
    headers = {
        'User-Agent': 'CoolBot/0.0 (https://example.org/coolbot/; coolbot@example.org)'
    }
    for attempt in range(4):  # Try up to four times (initial + 3 retries)
        try:
            response = requests.get(url, params={'format': 'json', 'query': query}, headers=headers, timeout=10)
            response.raise_for_status()  # This will raise an HTTPError if the HTTP request returned an unsuccessful status code
            return response.json()
        except requests.exceptions.HTTPError as http_err:
            raise Exception(f"HTTP error occurred: {http_err}") from None
        except requests.exceptions.Timeout:
            if attempt == 3:  # Give up after 3 retries
                raise Exception("Maximum retry attempts reached after timeout") from None
            time.sleep(2)  # Wait for 2 seconds before retrying
        except requests.exceptions.RequestException as err:
            raise Exception(f"Error occurred during request: {err}") from None
        except ValueError as json_err:
            raise Exception(f"JSON decoding error: {json_err}\nResponse text: {response.text}") from None
    return None  # In case the loop completes without returning
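

# Example usage (illustrative sketch; Q5633421 is assumed to be the Wikidata
# item for "scientific journal", adjust the QID for other item types):
if __name__ == "__main__":
    sparql = """
        SELECT ?journal ?journalLabel WHERE {
          ?journal wdt:P31 wd:Q5633421 .
          SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
        } LIMIT 5
    """
    data = run_query(sparql)
    for row in data["results"]["bindings"]:
        print(row["journal"]["value"], row["journalLabel"]["value"])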
# construct URL
LOBID_ENDPOINT="http://lobid.org/resources/search"
P_Q='inCollection.id:"http://lobid.org/resources/HT014846970#!" AND _exists_:issn AND language.label:deutsch'
P_NESTED='subject:subject.notation:340 AND subject.source.id:"https://d-nb.info/gnd/4149423-4"'
LOBID_URL="${LOBID_ENDPOINT}?q=$(jq -nr --arg str "${P_Q}" '$str|@uri')&nested=$(jq -nr --arg str "${P_NESTED}" '$str|@uri')"
# execute the http request if the file hasn't already been downloaded
[ -f data/zdb-ger-law-issn-all.json ] || curl -H "Accept: application/x-jsonlines" "$LOBID_URL" > data/zdb-ger-law-issn-all.json
# convert JSON Lines into a JSON array and select the wanted fields
JQ_CMD='
[ .[] | {
    id: .id,
    title: .title,
    issn: (.issn | join(" ")),
    alternativeTitle: (if .alternativeTitle then (.alternativeTitle | join("; ")) else null end),
    otherTitleInformation: (if .otherTitleInformation then (.otherTitleInformation | join("; ")) else null end),
    shortTitle: (if .shortTitle then (.shortTitle | join("; ")) else null end)
}]'
cat data/zdb-ger-law-issn-all.json | jq -cs '.' | jq -r "${JQ_CMD}" > data/zdb-ger-law-issn.json
# convert the JSON array into CSV
JQ_JSON_TO_CSV='(map(keys) | add | unique) as $cols | map(. as $row | $cols | map($row[.])) as $rows | $cols, $rows[] | @csv'
cat data/zdb-ger-law-issn.json | jq -r "${JQ_JSON_TO_CSV}" > data/zdb-ger-law-issn.csv
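# quick sanity check (illustrative): show the CSV header row and count the data rows
head -n 1 data/zdb-ger-law-issn.csv
wc -l data/zdb-ger-law-issn.csv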