diff --git a/langchain-experiments/compare-models.ipynb b/langchain-experiments/compare-models.ipynb
index 3e9f2dde7999a38bccde4cf79c2d697e24a11520..d6f8a3034c6f13365930d93e66e8a07d2b2342b9 100644
--- a/langchain-experiments/compare-models.ipynb
+++ b/langchain-experiments/compare-models.ipynb
@@ -251,7 +251,7 @@
    "source": [
     "## TheBloke/Llama-2-70B-chat-GPTQ via Huggingface Inference Endpoint\n",
     "\n",
-    "The 70 billion parameter variant [does a bit better](data/output/editors-llama-2-70b-chat-gptq.csv) but, among other things, doesn't the academic titles right. It also cannot be persuaded to [not comment on the CSV output].(data/output/editors-llama-2-70b-chat-gptq.txt). Given that the model costs $13/h to run, that's not really that impressive."
+    "The 70 billion parameter variant [does a bit better](data/output/editors-llama-2-70b-chat-gptq.csv) but, among other things, doesn't get the academic titles right. It also cannot be persuaded [not to comment on the CSV output](data/output/editors-llama-2-70b-chat-gptq.txt). Given that the model costs $13/h to run, the result is not impressive."
    ],
    "metadata": {
     "collapsed": false