diff --git a/wikidata/data-extraction.ipynb b/wikidata/data-extraction.ipynb
index 9f555d54c45343107aa61bea32c06d27000c8911..e7d23cd7d578e0f8a25cda20a856762fa0b54652 100644
--- a/wikidata/data-extraction.ipynb
+++ b/wikidata/data-extraction.ipynb
@@ -211,7 +211,7 @@
    "source": [
     "## Manual correction\n",
     "\n",
-    "The data has now been downloaded to `data/<name>-chatgpt.csv`. It needs to be cleaned and augmented before upload, for example by loading it into OpenRefine and reconciling the `object` column via the WikiData Reconciliation service. Afterward, remove the object-qid column and recreate it via the \"add column based on this column\" function using `ucell.recon.match.id` GREL expression. \n",
+    "The data has now been downloaded to `data/<name>-chatgpt.csv`. It needs to be cleaned and augmented before upload, for example by loading it into OpenRefine and reconciling the `object` column via the Wikidata reconciliation service. Afterward, remove the object-qid column and recreate it via the \"Add column based on this column\" function, using the `cell.recon.match.id` GREL expression.\n",
     "\n",
     "Otherwise, you can also just look up the terms and fill out the object-qid column manually. \n",
     "\n",
@@ -507,7 +507,7 @@
     "\n",
     "\n",
     "# main function\n",
-    "def update_wikidata(file_path):\n",
+    "def update_wikidata_from_csv(file_path):\n",
     "    site = Site(\"wikidata\", \"wikidata\")\n",
     "    repo = site.data_repository()\n",
     "\n",
@@ -559,7 +559,7 @@
     "    previous_object_qid = object_qid\n",
     "    previous_claim = claim\n",
     "\n",
-    "update_wikidata('data/Erhard Blankenburg.csv')"
+    "update_wikidata_from_csv('data/Erhard Blankenburg.csv')"
    ],
    "metadata": {
     "collapsed": false,