[06_calc_e.py] Scaling E-field and writing to e_scaled.hdf
Hi again,
Something confusing happened: I think the E-field simulation ran through almost completely. After the last coil position was calculated, script 06 did not proceed any further and just hung, so the scaling and the writing of e_scaled.hdf were never done. I stopped it this morning (the last coil position was calculated yesterday around 20:00).
The console looked like this for roughly 12 hours:
```
[ simnibs ]INFO: Running Simulation 1005 out of 1005
[ simnibs ]INFO: Using solver options: -ksp_type cg -ksp_rtol 1e-10 -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_coarsen_type HMIS
[ simnibs ]INFO: Preparing the KSP
[ simnibs ]INFO: Time to prepare the KSP: 7.20s
[ simnibs ]INFO: Solving system 1 of 1
[ simnibs ]INFO: Running PETSc with KSP: cg PC: hypre
[ simnibs ]INFO: Number of iterations: 39 Residual Norm: 1.09e-10
[ simnibs ]INFO: KSP Converged with reason: 2
[ simnibs ]INFO: Time to solve system: 25.46s
```
Notably, the KeyboardInterrupt traceback on stopping everything showed that the script was still stuck in the multiprocessing handling (see the following output):
```
^CProcess ForkPoolWorker-251:
Process ForkPoolWorker-252:
Process ForkPoolWorker-250:
Process ForkPoolWorker-253:
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
[unset]: [unset]: [unset]: [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=59
:
write_line error; fd=-1 buf=:cmd=abort exitcode=59
:
write_line error; fd=-1 buf=:cmd=abort exitcode=59
:
write_line error; fd=-1 buf=:cmd=abort exitcode=59
:
system msg for write_line failure : Bad file descriptor
system msg for write_line failure : Bad file descriptor
system msg for write_line failure : Bad file descriptor
system msg for write_line failure : Bad file descriptor
Traceback (most recent call last):
File "/home/nic/Dokumente/tmsloc_proto/scripts/run_simnibs.py", line 269, in <module>
field=fields
File "/home/nic/Dokumente/tmsloc_proto/scripts/run_simnibs.py", line 248, in tms_many_simulations
[s.get() for s in sims]
File "/home/nic/Dokumente/tmsloc_proto/scripts/run_simnibs.py", line 248, in <listcomp>
[s.get() for s in sims]
File "/home/nic/miniconda3/envs/tms_loco/lib/python3.7/multiprocessing/pool.py", line 651, in get
self.wait(timeout)
File "/home/nic/miniconda3/envs/tms_loco/lib/python3.7/multiprocessing/pool.py", line 648, in wait
self._event.wait(timeout)
File "/home/nic/miniconda3/envs/tms_loco/lib/python3.7/threading.py", line 552, in wait
signaled = self._cond.wait(timeout)
File "/home/nic/miniconda3/envs/tms_loco/lib/python3.7/threading.py", line 296, in wait
waiter.acquire()
KeyboardInterrupt
```
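For context, a minimal sketch of the failure mode as I understand it (the worker function, pool size, and timeout value below are made up; only the bare `.get()` pattern comes from the traceback): `AsyncResult.get()` without a timeout blocks forever when a worker dies without ever posting its result, whereas `get(timeout=...)` would raise `multiprocessing.TimeoutError` instead of hanging indefinitely.
```python
# Minimal sketch of the hang, NOT the actual run_simnibs.py code:
# a worker killed from outside (e.g. via MPI_Abort) never posts its
# result, so a bare AsyncResult.get() waits forever.
import multiprocessing


def simulate(pos_idx):
    # stand-in for one FEM solve per coil position (hypothetical)
    return pos_idx


if __name__ == "__main__":
    with multiprocessing.Pool(processes=4) as pool:
        sims = [pool.apply_async(simulate, (i,)) for i in range(1005)]
        # a timeout turns a silent hang into a visible TimeoutError;
        # 3600 s per position is an arbitrary upper bound
        results = [s.get(timeout=3600) for s in sims]
```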
Nevertheless, e.hdf was created in the results folder, also around 20:00 yesterday. So I ran script 06 again with the following lines commented out (the ones that launch run_simnibs.py), so that only the initialization of the HDF5 file is performed:
```python
# Running electric field simulations
# ========================================================================================================
#scripts_folder = pathlib.Path(__file__).parent.absolute()
#os.system(f"{sys.executable} " + os.path.join(scripts_folder, "run_simnibs.py") +
# " --folder " + fn_out + " --fn_subject " + fn_subject + " --fn_coilpos " + fn_coilpos_hdf5 +
# " --cpus " + str(n_cpu) + " --anisotropy_type " + anisotropy_type + " -m " + mesh_idx +
#          " -r " + str(roi_idx))
```
But I am not sure whether this worked as intended; at least e_scaled.hdf was created.
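In case it helps with checking: this is a minimal sketch of how the file contents could be inspected with h5py (I do not know the internal dataset layout of e_scaled.hdf, so the snippet only lists what is actually stored):
```python
# Minimal sketch: list everything stored in e_scaled.hdf with h5py.
# The internal group/dataset names are whatever the pipeline writes,
# so nothing beyond the file name is assumed here.
import h5py

with h5py.File("e_scaled.hdf", "r") as f:
    # print every group and dataset path in the file
    f.visit(print)
```
If the expected datasets show up with plausible shapes, the scaling step presumably ran.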