The DCP compiler doesn't seem to overwork the processor.
This is taken from a compilation of a 1-hour film from a DPX / PCM 5.1 master. Compilation rate is around 13 fps using 3 slave servers.
I would have expected the processor to be working its backside off, which, if it did, would maybe shorten the total time from 3 hours to 15 minutes?
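(Back-of-the-envelope, assuming a 24 fps feature: a 1-hour film is 60 × 60 × 24 = 86,400 frames, and 86,400 frames / 13 fps ≈ 6,650 s, i.e. a bit under 2 hours of pure encoding, so the 3-hour total evidently includes more than just frame encoding.)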
Ineffective coding?
- Posts: 30
- Joined: Mon Jan 15, 2018 4:20 pm
- Posts: 2806
- Joined: Tue Apr 15, 2014 9:11 pm
- Location: Germany
Re: Ineffective coding?
A screenshot alone doesn't tell us much; we need to know:
- specs of all systems (type and number of CPUs, RAM)
- DCP-o-matic version number
- encoding thread setting for master and slaves
- network bandwidth
- disk setup (master machine)
- type of source footage
- target DCP parameters
In a default installation, DCP-o-matic detects all logical cores automatically, both on the local master and on the encoding servers, and will max out all of them during encoding. There are scenarios where that may not work, depending on network speed, disk speed, etc. Also, DCP creation is split into different parts: a short preprocessing/setup part, a long J2C encoding part, and a short-to-medium post-processing part. Only the middle J2C encoding part is heavily multithreaded and will fully use all local and remote CPU resources. During pre- or post-processing, CPUs may indeed appear unloaded at times, and the remote servers will do absolutely nothing.
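To illustrate the auto-detection step, here is a minimal C++ sketch (the function name default_encoding_threads is made up for illustration; this is not DCP-o-matic's actual code):

#include <thread>
#include <algorithm>

// Sketch: derive a default encoding thread count the way a default
// installation might, i.e. one thread per logical core.
unsigned default_encoding_threads()
{
    // hardware_concurrency() reports logical cores (e.g. 8 on a 4-core
    // CPU with Hyper-Threading); it may return 0 if unknown.
    unsigned const n = std::thread::hardware_concurrency();
    return std::max(1u, n);
}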
Consider e.g. two typical i7 systems, each with one 4-core CPU with Hyper-Threading. If you start DCP-o-matic on the master machine, it will see and configure 8 threads for local encoding. When the remote encoder is started on the second machine, it will see 8 logical CPUs as well.
When you start encoding, the master will do a bit of preprocessing locally, then start encoding on all 8 local threads while looking for remote servers on the network. When one is detected, it will send uncompressed images over the network to the remote server, as many at a time as the remote server offers threads (8, in this case). So in total, 16 threads run in parallel doing J2C encoding. In DCP-o-matic versions before 2.7x, the J2C encoding and thread balancing were not optimized and cores would only be loaded to, say, 80-90%. Since 2.7x this has improved: when I configure DCP-o-matic to use all available threads/logical CPUs on my machines, I see 100% CPU load during J2C encoding.
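To make the dispatch scheme concrete, here is a heavily simplified C++ sketch (all names are invented for illustration; the real DCP-o-matic scheduler is more involved): each encoding thread, local or remote, pulls the next frame number from a shared counter, so a remote server advertising 8 threads simply contributes 8 more pullers.

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<int> next_frame{0};
int const total_frames = 100;            // stand-in for the whole film

void encode_j2c(int frame, bool remote)
{
    // Placeholder for the actual JPEG2000 encode (or, for a remote
    // worker, for sending the uncompressed image and waiting for the
    // encoded result to come back).
    std::printf("frame %d encoded %s\n", frame, remote ? "remotely" : "locally");
}

void worker(bool remote)
{
    for (;;) {
        int const f = next_frame.fetch_add(1);
        if (f >= total_frames) return;   // no frames left
        encode_j2c(f, remote);
    }
}

int main()
{
    std::vector<std::thread> pool;
    // 8 local threads (the master's logical cores) ...
    for (int i = 0; i < 8; ++i) pool.emplace_back(worker, false);
    // ... plus 8 for the remote server's advertised threads = 16 in flight.
    for (int i = 0; i < 8; ++i) pool.emplace_back(worker, true);
    for (auto& t : pool) t.join();
}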
Certain aspects of the conversion can leave CPUs unloaded: slow disk access, slow network speed, insufficient RAM, weird source footage, bad thread settings. I suggest trying the Sintel benchmark from the DCP-o-matic website. You are using DPX; it is not impossible that this causes a slowdown, as there were some issues around DPX recently. Are these DPX from Resolve?
- Carsten