Uploading big files to DARIAH
The increased Gunicorn worker timeout leads to a situation where requests are no longer served while a long-running task occupies the worker.
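A blocked synchronous worker cannot serve any other request until the task finishes. A minimal sketch (plain Python, not the actual DARIAH code; `publish_fn` and the handler name are illustrative assumptions) of the usual fix: hand the long-running publish to a background executor and return immediately, so the worker is freed. In this deployment the existing qcluster service would be the natural place for such a task.

```python
# Sketch: enqueue the long-running publish instead of running it inline,
# so the request handler (and thus the Gunicorn worker) returns at once.
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

def handle_publish_request(dataset_id, publish_fn):
    """Submit the publish task to the background pool and answer immediately.

    Returns the immediate response plus the Future, so callers (or tests)
    can still observe the eventual result.
    """
    future = executor.submit(publish_fn, dataset_id)
    return {"status": "started", "dataset": dataset_id}, future
```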
==> A user found a way around this: create a new dataset revision that reuses the big files from the old revision.
Also, it is very hard to counter this DoS-"attack", since an open browser tab keeps firing Intercooler requests at the API and restarts the publishing process over and over. This issue MUST be resolved.
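One way to defuse the repeated Intercooler retries is to make the publish endpoint idempotent: only the first request for a dataset actually starts a run, later ones are rejected until it finishes. A hedged sketch follows; in the real deployment the guard would live in Redis (already part of this stack) so it works across processes, but a process-local set illustrates the idea. All names here are hypothetical.

```python
# Sketch: reject a publish request for a dataset that already has a
# publish run in flight, so browser-driven retries cannot stack up.
import threading

_in_flight = set()          # dataset IDs currently being published
_guard = threading.Lock()   # protects _in_flight

def try_start_publish(dataset_id):
    """Return True only for the request that wins the right to publish."""
    with _guard:
        if dataset_id in _in_flight:
            return False  # a publish for this dataset is already running
        _in_flight.add(dataset_id)
        return True

def finish_publish(dataset_id):
    """Release the slot once the publish run has completed (or failed)."""
    with _guard:
        _in_flight.discard(dataset_id)
```

With a Redis-backed variant (e.g. `SET key NX EX <timeout>`), the guard also survives worker restarts and applies an automatic expiry in case a run crashes without releasing its slot.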
Current log output:
Attaching to discuss-data_django_1, discuss-data_qcluster_1, discuss-data_cms_1, discuss-data_redis_1, discuss-data_postgres_1, discuss-data_prometheus_1, discuss-data_postgres_metrics_1, discuss-data_ingress_1, discuss-data_redis_metrics_1, discuss-data_media_1, discuss-data_elasticsearch_metrics_1, discuss-data_elasticsearch_1
elasticsearch_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:25,410Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sdb1)]], net usable_space [2.5tb], net total_space [2.9tb], types [ext4]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:25,413Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "heap size [990.7mb], compressed ordinary object pointers [true]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:25,522Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "node name [f169374dc499], node ID [KqjWyPyiQfqDItGnALMTAA], cluster name [docker-cluster]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:25,523Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "version[7.5.2], pid[1], build[default/docker/8bec50e1e0ad29dad5653712cf3bb580cd1afcdf/2020-01-15T12:11:52.313576Z], OS[Linux/4.15.0-163-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13.0.1/13.0.1+9]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:25,523Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:25,523Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-963004302791013167, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,177Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [aggs-matrix-stats]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,177Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [analysis-common]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,178Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [flattened]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,178Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [frozen-indices]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,178Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [ingest-common]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,178Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [ingest-geoip]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,178Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [ingest-user-agent]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,178Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [lang-expression]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,178Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [lang-mustache]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,179Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [lang-painless]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,179Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [mapper-extras]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,179Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [parent-join]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,179Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [percolator]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,179Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [rank-eval]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,180Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [reindex]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,180Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [repository-url]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,180Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [search-business-rules]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,180Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [spatial]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,180Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [transform]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,181Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [transport-netty4]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,181Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [vectors]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,181Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-analytics]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,181Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-ccr]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,181Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-core]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,182Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-deprecation]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,182Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-enrich]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,182Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-graph]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,182Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-ilm]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,182Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-logstash]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,182Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-ml]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,182Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-monitoring]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,183Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-rollup]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,183Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-security]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,183Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-sql]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,183Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-voting-only-node]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,183Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "loaded module [x-pack-watcher]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:30,184Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "no plugins loaded" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:33,565Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
cms_1 | PostgreSQL is available
cms_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
django_1 | PostgreSQL is available
django_1 | Waiting for ElasticSearch to become available...
django_1 | Waiting for ElasticSearch to become available...
django_1 | Waiting for ElasticSearch to become available...
django_1 | Waiting for ElasticSearch to become available...
ingress_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
ingress_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
ingress_1 | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
ingress_1 | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
ingress_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
ingress_1 | 2022/01/05 15:11:21 [emerg] 1#1: host not found in upstream "cms" in /etc/nginx/nginx.conf:21
ingress_1 | nginx: [emerg] host not found in upstream "cms" in /etc/nginx/nginx.conf:21
ingress_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
ingress_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
ingress_1 | 10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
ingress_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
ingress_1 | 2022/01/05 15:11:24 [emerg] 1#1: host not found in upstream "cms" in /etc/nginx/nginx.conf:21
ingress_1 | nginx: [emerg] host not found in upstream "cms" in /etc/nginx/nginx.conf:21
ingress_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
ingress_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
ingress_1 | 10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
ingress_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
ingress_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
ingress_1 | 2022/01/05 15:11:30 [error] 24#24: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 2022/01/05 15:11:30 [error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:30 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:30 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:11:31 [error] 24#24: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:31 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:11:32 [error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:32 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
elasticsearch_metrics_1 | level=info ts=2022-01-05T15:11:23.304008589Z caller=clusterinfo.go:200 msg="triggering initial cluster info call"
elasticsearch_metrics_1 | level=info ts=2022-01-05T15:11:23.304120823Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
elasticsearch_metrics_1 | level=error ts=2022-01-05T15:11:23.340099219Z caller=clusterinfo.go:253 msg="failed to get cluster info" err="Get http://elasticsearch:9200/: dial tcp 172.30.0.3:9200: connect: connection refused"
elasticsearch_metrics_1 | level=error ts=2022-01-05T15:11:23.340291897Z caller=clusterinfo.go:174 msg="failed to retrieve cluster info from ES" err="Get http://elasticsearch:9200/: dial tcp 172.30.0.3:9200: connect: connection refused"
elasticsearch_metrics_1 | level=info ts=2022-01-05T15:11:33.304241983Z caller=main.go:153 msg="initial cluster info call timed out"
elasticsearch_metrics_1 | level=info ts=2022-01-05T15:11:33.304820825Z caller=main.go:188 msg="starting elasticsearch_exporter" addr=:9114
elasticsearch_metrics_1 | level=warn ts=2022-01-05T15:11:33.369013176Z caller=nodes.go:1851 msg="failed to fetch and decode node stats" err="failed to get cluster health from http://elasticsearch:9200_nodes/_local/stats: Get http://elasticsearch:9200/_nodes/_local/stats: dial tcp 172.30.0.3:9200: connect: connection refused"
elasticsearch_metrics_1 | level=warn ts=2022-01-05T15:11:33.369393851Z caller=cluster_health.go:270 msg="failed to fetch and decode cluster health" err="failed to get cluster health from http://elasticsearch:9200/_cluster/health: Get http://elasticsearch:9200/_cluster/health: dial tcp 172.30.0.3:9200: connect: connection refused"
media_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
media_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
media_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
media_1 | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
media_1 | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
media_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
media_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
media_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
media_1 | 2022/01/05 15:11:16 [notice] 1#1: using the "epoll" event method
media_1 | 2022/01/05 15:11:16 [notice] 1#1: nginx/1.21.5
media_1 | 2022/01/05 15:11:16 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
media_1 | 2022/01/05 15:11:16 [notice] 1#1: OS: Linux 4.15.0-163-generic
media_1 | 2022/01/05 15:11:16 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
media_1 | 2022/01/05 15:11:16 [notice] 1#1: start worker processes
media_1 | 2022/01/05 15:11:16 [notice] 1#1: start worker process 32
media_1 | 2022/01/05 15:11:16 [notice] 1#1: start worker process 33
media_1 | 2022/01/05 15:11:16 [notice] 1#1: start worker process 34
media_1 | 2022/01/05 15:11:16 [notice] 1#1: start worker process 35
postgres_metrics_1 | time="2022-01-05T15:11:22Z" level=info msg="Established new database connection to \"postgres:5432\"." source="postgres_exporter.go:878"
postgres_metrics_1 | time="2022-01-05T15:11:23Z" level=info msg="Established new database connection to \"postgres:5432\"." source="postgres_exporter.go:878"
postgres_metrics_1 | time="2022-01-05T15:11:25Z" level=info msg="Established new database connection to \"postgres:5432\"." source="postgres_exporter.go:878"
postgres_metrics_1 | time="2022-01-05T15:11:26Z" level=info msg="Semantic Version Changed on \"postgres:5432\": 0.0.0 -> 11.11.0" source="postgres_exporter.go:1405"
postgres_metrics_1 | time="2022-01-05T15:11:26Z" level=info msg="Starting Server: :9187" source="postgres_exporter.go:1672"
prometheus_1 | level=info ts=2022-01-05T15:11:16.232Z caller=main.go:322 msg="No time or size retention was set so using the default time retention" duration=15d
prometheus_1 | level=info ts=2022-01-05T15:11:16.232Z caller=main.go:360 msg="Starting Prometheus" version="(version=2.23.0, branch=HEAD, revision=26d89b4b0776fe4cd5a3656dfa520f119a375273)"
prometheus_1 | level=info ts=2022-01-05T15:11:16.233Z caller=main.go:365 build_context="(go=go1.15.5, user=root@37609b3a0a21, date=20201126-10:56:17)"
prometheus_1 | level=info ts=2022-01-05T15:11:16.233Z caller=main.go:366 host_details="(Linux 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 6cd2e596da15 (none))"
prometheus_1 | level=info ts=2022-01-05T15:11:16.233Z caller=main.go:367 fd_limits="(soft=1048576, hard=1048576)"
prometheus_1 | level=info ts=2022-01-05T15:11:16.233Z caller=main.go:368 vm_limits="(soft=unlimited, hard=unlimited)"
prometheus_1 | level=info ts=2022-01-05T15:11:16.236Z caller=main.go:722 msg="Starting TSDB ..."
prometheus_1 | level=info ts=2022-01-05T15:11:16.236Z caller=web.go:528 component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus_1 | level=info ts=2022-01-05T15:11:16.322Z caller=head.go:645 component=tsdb msg="Replaying on-disk memory mappable chunks if any"
prometheus_1 | level=info ts=2022-01-05T15:11:16.322Z caller=head.go:659 component=tsdb msg="On-disk memory mappable chunks replay completed" duration=5.427µs
prometheus_1 | level=info ts=2022-01-05T15:11:16.322Z caller=head.go:665 component=tsdb msg="Replaying WAL, this may take a while"
prometheus_1 | level=info ts=2022-01-05T15:11:16.323Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
prometheus_1 | level=info ts=2022-01-05T15:11:16.323Z caller=head.go:722 component=tsdb msg="WAL replay completed" checkpoint_replay_duration=116.514µs wal_replay_duration=677.734µs total_replay_duration=845.292µs
prometheus_1 | level=info ts=2022-01-05T15:11:16.329Z caller=main.go:742 fs_type=EXT4_SUPER_MAGIC
prometheus_1 | level=info ts=2022-01-05T15:11:16.329Z caller=main.go:745 msg="TSDB started"
prometheus_1 | level=info ts=2022-01-05T15:11:16.329Z caller=main.go:871 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus_1 | level=info ts=2022-01-05T15:11:16.346Z caller=main.go:902 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=16.505197ms remote_storage=3.973µs web_handler=780ns query_engine=1.845µs scrape=12.215961ms scrape_sd=268.479µs notify=1.861µs notify_sd=6.53µs rules=2.386µs
prometheus_1 | level=info ts=2022-01-05T15:11:16.346Z caller=main.go:694 msg="Server is ready to receive web requests."
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2022-01-05 15:11:23.191 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2022-01-05 15:11:23.191 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2022-01-05 15:11:23.429 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2022-01-05 15:11:23.714 UTC [25] LOG: database system was shut down at 2022-01-05 15:10:59 UTC
postgres_1 | 2022-01-05 15:11:23.888 UTC [26] FATAL: the database system is starting up
postgres_1 | 2022-01-05 15:11:23.918 UTC [1] LOG: database system is ready to accept connections
qcluster_1 | PostgreSQL is available
qcluster_1 | Waiting for ElasticSearch to become available...
qcluster_1 | Waiting for ElasticSearch to become available...
qcluster_1 | Waiting for ElasticSearch to become available...
qcluster_1 | Waiting for ElasticSearch to become available...
qcluster_1 | Waiting for ElasticSearch to become available...
qcluster_1 | Waiting for ElasticSearch to become available...
redis_1 | 1:C 05 Jan 2022 15:11:22.928 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 05 Jan 2022 15:11:22.928 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 05 Jan 2022 15:11:22.928 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 05 Jan 2022 15:11:22.931 * Running mode=standalone, port=6379.
redis_1 | 1:M 05 Jan 2022 15:11:22.932 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 05 Jan 2022 15:11:22.932 # Server initialized
redis_1 | 1:M 05 Jan 2022 15:11:22.932 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 05 Jan 2022 15:11:22.932 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 05 Jan 2022 15:11:22.932 * Ready to accept connections
redis_metrics_1 | time="2022-01-05T15:11:18Z" level=info msg="Redis Metrics Exporter v1.11.1 build date: 2020-08-28-17:21:19 sha1: 3d94cd439e70d3ab478bfa65c1f131ab978a60ad Go: go1.15 GOOS: linux GOARCH: amd64"
redis_metrics_1 | time="2022-01-05T15:11:18Z" level=info msg="Providing metrics at :9121/metrics"
django_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:34,334Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "[controller/107] [Main.cc@110] controller (64 bit): Version 7.5.2 (Build 68f6981dfb8e2d) Copyright (c) 2020 Elasticsearch BV" }
qcluster_1 | Waiting for ElasticSearch to become available...
ingress_1 | 2022/01/05 15:11:34 [error] 24#24: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:34 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:34,916Z", "level": "DEBUG", "component": "o.e.a.ActionModule", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "Using REST wrapper from plugin org.elasticsearch.xpack.security.Security" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:35,087Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "using discovery type [single-node] and seed hosts providers [settings]" }
django_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
qcluster_1 | Waiting for ElasticSearch to become available...
ingress_1 | 2022/01/05 15:11:35 [error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:35 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:35,825Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "initialized" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:35,826Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "starting ..." }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:35,987Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "publish_address {172.28.0.2:9300}, bound_addresses {0.0.0.0:9300}" }
django_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:36,254Z", "level": "WARN", "component": "o.e.b.BootstrapChecks", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:36,256Z", "level": "INFO", "component": "o.e.c.c.Coordinator", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "cluster UUID [ufiF_icnSVG7AMOXV2L-lw]" }
qcluster_1 | Waiting for ElasticSearch to become available...
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:36,447Z", "level": "INFO", "component": "o.e.c.s.MasterService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "elected-as-master ([1] nodes joined)[{f169374dc499}{KqjWyPyiQfqDItGnALMTAA}{-d5pVfR_S-2EWjPonorAvw}{172.28.0.2}{172.28.0.2:9300}{dilm}{ml.machine_memory=8352382976, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 123, version: 2094, delta: master node changed {previous [], current [{f169374dc499}{KqjWyPyiQfqDItGnALMTAA}{-d5pVfR_S-2EWjPonorAvw}{172.28.0.2}{172.28.0.2:9300}{dilm}{ml.machine_memory=8352382976, xpack.installed=true, ml.max_open_jobs=20}]}" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:36,662Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "master node changed {previous [], current [{f169374dc499}{KqjWyPyiQfqDItGnALMTAA}{-d5pVfR_S-2EWjPonorAvw}{172.28.0.2}{172.28.0.2:9300}{dilm}{ml.machine_memory=8352382976, xpack.installed=true, ml.max_open_jobs=20}]}, term: 123, version: 2094, reason: Publication{term=123, version=2094}" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:36,726Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "publish_address {172.28.0.2:9200}, bound_addresses {0.0.0.0:9200}", "cluster.uuid": "ufiF_icnSVG7AMOXV2L-lw", "node.id": "KqjWyPyiQfqDItGnALMTAA" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:36,726Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "started", "cluster.uuid": "ufiF_icnSVG7AMOXV2L-lw", "node.id": "KqjWyPyiQfqDItGnALMTAA" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:37,163Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "license [27e0902d-d923-4729-bd0f-ca4295c85c71] mode [basic] - valid", "cluster.uuid": "ufiF_icnSVG7AMOXV2L-lw", "node.id": "KqjWyPyiQfqDItGnALMTAA" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:37,164Z", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "Active license is now [BASIC]; Security is disabled", "cluster.uuid": "ufiF_icnSVG7AMOXV2L-lw", "node.id": "KqjWyPyiQfqDItGnALMTAA" }
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:37,179Z", "level": "INFO", "component": "o.e.g.GatewayService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "recovered [5] indices into cluster_state", "cluster.uuid": "ufiF_icnSVG7AMOXV2L-lw", "node.id": "KqjWyPyiQfqDItGnALMTAA" }
django_1 | Waiting for ElasticSearch to become available...
cms_1 | Waiting for ElasticSearch to become available...
qcluster_1 | Waiting for ElasticSearch to become available...
ingress_1 | 2022/01/05 15:11:37 [error] 24#24: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:37 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
elasticsearch_1 | {"type": "server", "timestamp": "2022-01-05T15:11:38,182Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "f169374dc499", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[datasets][0]]]).", "cluster.uuid": "ufiF_icnSVG7AMOXV2L-lw", "node.id": "KqjWyPyiQfqDItGnALMTAA" }
cms_1 | yellow
cms_1 | ElasticSearch is available
django_1 | yellow
django_1 | ElasticSearch is available
qcluster_1 | yellow
qcluster_1 | ElasticSearch is available
ingress_1 | 2022/01/05 15:11:38 [error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:38 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:11:40 [error] 24#24: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:40 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:11:41 [error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:41 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:11:43 [error] 24#24: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:43 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:11:44 [error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:44 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
qcluster_1 | 15:11:45 [Q] INFO Q Cluster triple-stairway-april-golf starting.
qcluster_1 | 15:11:45 [Q] INFO Process-1:1 ready for work at 139
qcluster_1 | 15:11:45 [Q] INFO Process-1:2 ready for work at 140
qcluster_1 | 15:11:45 [Q] INFO Process-1:3 ready for work at 141
qcluster_1 | 15:11:45 [Q] INFO Process-1:4 ready for work at 142
qcluster_1 | 15:11:45 [Q] INFO Process-1 guarding cluster triple-stairway-april-golf
qcluster_1 | 15:11:45 [Q] INFO Process-1:5 monitoring at 143
qcluster_1 | 15:11:45 [Q] INFO Process-1:6 pushing tasks at 144
qcluster_1 | 15:11:45 [Q] INFO Q Cluster triple-stairway-april-golf running.
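The Q Cluster startup above points at the core of the repeated-publish problem: every intercooler request from the open tab starts the publishing process again, because nothing records that a publish run is already in flight. A minimal sketch of an idempotency guard, as a plain-Python illustration only; in the real app the flag would have to live in Redis or the database (not process memory), and `try_start_publish`, `finish_publish`, and `start_job` are hypothetical names, not actual discuss-data code:

```python
import threading

# In-flight publish runs, keyed by dataset id. Process-local here for the
# sketch; production needs a shared store (Redis/DB row) so all gunicorn
# workers and the qcluster see the same state.
_running = set()
_lock = threading.Lock()

def try_start_publish(dataset_id, start_job):
    """Start the publish job at most once per dataset; further calls no-op."""
    with _lock:
        if dataset_id in _running:
            # a retried intercooler request lands here instead of
            # kicking off the long-running task a second time
            return "already running"
        _running.add(dataset_id)
    start_job(dataset_id)  # in production: enqueue to Django-Q, don't run inline
    return "started"

def finish_publish(dataset_id):
    """Clear the flag when the job completes or fails."""
    with _lock:
        _running.discard(dataset_id)
```

With a guard like this, the browser tab can keep firing requests; only the first one does work, and the rest get a cheap "already running" answer.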
ingress_1 | 2022/01/05 15:11:46 [error] 24#24: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:46 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:11:47 [error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:47 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:11:49 [error] 24#24: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:49 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
django_1 |
django_1 | 0 static files copied to '/app/staticfiles', 354 unmodified, 749 post-processed.
cms_1 |
cms_1 | 0 static files copied to '/app/staticfiles', 354 unmodified, 749 post-processed.
ingress_1 | 2022/01/05 15:11:50 [error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:11:50 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
django_1 | [2022-01-05 15:11:50 +0000] [129] [INFO] Starting gunicorn 20.0.4
cms_1 | [2022-01-05 15:11:50 +0000] [140] [INFO] Starting gunicorn 20.0.4
django_1 | [2022-01-05 15:11:50 +0000] [129] [INFO] Listening at: http://0.0.0.0:5000 (129)
django_1 | [2022-01-05 15:11:50 +0000] [129] [INFO] Using worker: sync
cms_1 | [2022-01-05 15:11:50 +0000] [140] [INFO] Listening at: http://0.0.0.0:5000 (140)
cms_1 | [2022-01-05 15:11:50 +0000] [140] [INFO] Using worker: sync
django_1 | [2022-01-05 15:11:50 +0000] [131] [INFO] Booting worker with pid: 131
cms_1 | [2022-01-05 15:11:50 +0000] [142] [INFO] Booting worker with pid: 142
cms_1 | [2022-01-05 15:11:50 +0000] [143] [INFO] Booting worker with pid: 143
django_1 | [2022-01-05 15:11:50 +0000] [132] [INFO] Booting worker with pid: 132
cms_1 | [2022-01-05 15:11:50 +0000] [144] [INFO] Booting worker with pid: 144
django_1 | [2022-01-05 15:11:50 +0000] [133] [INFO] Booting worker with pid: 133
django_1 | [2022-01-05 15:11:50 +0000] [134] [INFO] Booting worker with pid: 134
cms_1 | [2022-01-05 15:11:50 +0000] [145] [INFO] Booting worker with pid: 145
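The gunicorn lines above show both app containers using the default `sync` worker class, where each worker serves exactly one request at a time, so a single long-running publish request occupies a worker for its whole duration and raising the worker timeout only makes the blockage last longer. A hypothetical `gunicorn.conf.py` sketch (values are assumptions, not the deployed config) that switches to threaded workers so one slow request no longer starves the container:

```python
# gunicorn.conf.py -- illustrative sketch, not the actual discuss-data config
bind = "0.0.0.0:5000"
workers = 4
# 'gthread' workers multiplex several requests over threads per worker,
# so one slow publish request does not block all other traffic
worker_class = "gthread"
threads = 8
# keep the timeout moderate; truly long work belongs in the qcluster,
# not in the request/response cycle
timeout = 120
```

This only mitigates the symptom; the publish task itself should still move into Django-Q so the request returns immediately.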
qcluster_1 | 15:12:15 [Q] ERROR server closed the connection unexpectedly
qcluster_1 | This probably means the server terminated abnormally
qcluster_1 | before or while processing the request.
qcluster_1 |
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:12:52 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 499 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
ingress_1 | 2022/01/05 15:12:52 [info] 24#24: *2 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=63&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 2022/01/05 15:12:53 [error] 24#24: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 172.25.0.1, server: nginx, request: "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1", upstream: "http://172.25.0.5:5000/dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET", host: "localhost:5000"
ingress_1 | 172.25.0.1 - - [05/Jan/2022:15:12:53 +0000] "GET /dataset/prep/159b9355-e5c3-4433-9a7e-9afa9277074f/edit/publish/final/upload?ic-request=true&&ic-id=27&ic-current-url=%2Fdataset%2Fprep%2F159b9355-e5c3-4433-9a7e-9afa9277074f%2Fedit%2Fpublish%2Ffinal&_method=GET HTTP/1.1" 504 167 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0"
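The closing 499/504 lines show the client giving up while nginx still waits on the blocked upstream, after which intercooler simply polls again. Intercooler.js documents an `X-IC-CancelPolling` response header that stops the client's poll loop; a minimal sketch of using it once the publish job is finished (the `poll_response` helper and its dict-shaped return are illustrative assumptions, not actual view code):

```python
def poll_response(job_done: bool) -> dict:
    """Build a response for an intercooler poll of the publish status."""
    headers = {"Content-Type": "text/html"}
    if job_done:
        # Intercooler.js cancels its ic-poll loop when it sees this
        # response header, so the open tab stops re-firing requests
        headers["X-IC-CancelPolling"] = "true"
    return {"status": 200, "headers": headers}
```

Combined with an idempotency check on the publish endpoint itself, this would defuse the accidental DoS from a tab left open on the publish page.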