<h1>ARMing Yourself - Working with x86_64 on ARM</h1>
<h2 id="introduction">Introduction</h2>
<p>In a prior post, <a href="/2019/12/arming-yourself/">ARMing Yourself - Working with ARM on x86_64</a>, I showed how to transparently run <code class="language-plaintext highlighter-rouge">arm32v7</code> and <code class="language-plaintext highlighter-rouge">arm64v8</code> binaries on <code class="language-plaintext highlighter-rouge">x86_64</code>, enabling the configuration/creation of ARM images and <code class="language-plaintext highlighter-rouge">debootstrap</code>’ped <a href="/2019/08/building-custom-root-filesystems">root file systems</a>. We can do the same on ARM to allow the transparent execution of <code class="language-plaintext highlighter-rouge">x86_64</code> containers/chroots.</p>
<p><code class="language-plaintext highlighter-rouge">TLDR</code> - Assuming you have installed the packages in the <a href="#required-packages-ubuntu-1804">Required Packages (Ubuntu 18.04)</a> section, skip ahead to <a href="#the-how">The How</a> below.</p>
<h2 id="required-packages-ubuntu-1804">Required Packages (Ubuntu 18.04)</h2>
<p>We need to install <code class="language-plaintext highlighter-rouge">QEMU</code> and <code class="language-plaintext highlighter-rouge">binfmt</code> support so that we can leverage the <a href="https://en.wikipedia.org/wiki/Binfmt_misc">binfmt_misc</a> support in the Linux kernel.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt-get update
<span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--no-install-recommends</span> <span class="se">\</span>
qemu qemu-system-misc qemu-user-static qemu-user binfmt-support
</code></pre></div></div>
<h2 id="the-what">The What</h2>
<p>The details behind the next steps are explained in the <a href="/2019/12/arming-yourself/#the-what">The What</a> section of <a href="/2019/12/arming-yourself/">ARMing Yourself</a>; we’ll only summarize them here.</p>
<h3 id="getting-the-magic-and-mask">Getting the Magic and Mask</h3>
<p>We’ll need to set up a string in the format <code class="language-plaintext highlighter-rouge">:name:type:offset:magic:mask:interpreter:flags</code> for <code class="language-plaintext highlighter-rouge">x86_64</code>. With <code class="language-plaintext highlighter-rouge">QEMU</code> and <code class="language-plaintext highlighter-rouge">binfmt-support</code> installed, getting the magic and mask is straightforward:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">cat</span> /var/lib/binfmts/qemu-x86_64
qemu-user-static
magic
0
<span class="se">\x</span>7f<span class="se">\x</span>45<span class="se">\x</span>4c<span class="se">\x</span>46<span class="se">\x</span>02<span class="se">\x</span>01<span class="se">\x</span>01<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>02<span class="se">\x</span>00<span class="se">\x</span>3e<span class="se">\x</span>00
<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>fe<span class="se">\x</span>fe<span class="se">\x</span><span class="nb">fc</span><span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>fe<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff
/usr/bin/qemu-x86_64-static
<span class="nb">yes</span>
</code></pre></div></div>
<p>Note that <code class="language-plaintext highlighter-rouge">\x7f\x45\x4c\x46</code> is <code class="language-plaintext highlighter-rouge">\x7fELF</code>, the <a href="http://man7.org/linux/man-pages/man5/elf.5.html">ELF</a> format <a href="https://en.wikipedia.org/wiki/Magic_number_(programming)">magic number</a>.</p>
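<p>If you want to sanity check these bytes against a real binary, you can dump the first 20 bytes of any native <code class="language-plaintext highlighter-rouge">x86_64</code> executable and compare them with the magic and mask below. This assumes <code class="language-plaintext highlighter-rouge">xxd</code> is installed; the path is just an example.</p>
<pre><code class="language-bash"># Print the first 20 bytes of the ELF header, one hex byte per column
xxd -l 20 -g 1 /bin/ls
</code></pre>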
<ul>
<li>Our magic is:
<ul>
<li><code class="language-plaintext highlighter-rouge">\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00</code></li>
</ul>
</li>
<li>Our mask:
<ul>
<li>We can use the same mask discussed in <a href="/2019/12/arming-yourself/#the-what">The What</a>:
<code class="language-plaintext highlighter-rouge">\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\xff\xff\xff</code></li>
</ul>
</li>
</ul>
<h2 id="the-how">The How</h2>
<h3 id="write-the-failing-test">Write the Failing Test</h3>
<p>If you try to run a container with an unknown format, it should fail with an error close to one of the following:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">standard_init_linux.go:211: exec user process caused "exec format error"</code></li>
<li><code class="language-plaintext highlighter-rouge">standard_init_linux.go:211: exec user process caused "no such file or directory"</code></li>
</ul>
<p>Go ahead and give it a try; we’ll come back to these after configuring the system to verify functionality.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker run amd64/ubuntu <span class="nb">uname</span> <span class="nt">-m</span>
standard_init_linux.go:211: <span class="nb">exec </span>user process caused <span class="s2">"exec format error"</span>
</code></pre></div></div>
<h3 id="configuration">Configuration</h3>
<p>With the details hashed out, it is time to set up our <code class="language-plaintext highlighter-rouge">systemd-binfmt.service</code> configuration:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Create the binfmt.d directory which is read at boot to configure</span>
<span class="c"># additional binary executable formats which can be handled by the system.</span>
<span class="nb">sudo mkdir</span> <span class="nt">-p</span> /lib/binfmt.d
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s1">'echo :qemu-x86_64:M::\\x7f\\x45\\x4c\\x46\\x02\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x3e\\x00:\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xfe\\xff\\xff\\xff:/usr/bin/qemu-x86_64-static:F > /lib/binfmt.d/qemu-x86_64-static.conf'</span>
<span class="c"># Restart the service to force an evaluation of the /lib/binfmt.d directory</span>
<span class="nb">sudo </span>systemctl restart systemd-binfmt.service
</code></pre></div></div>
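<p>Before running a container, you can optionally confirm that the kernel picked up the new format. Assuming <code class="language-plaintext highlighter-rouge">binfmt_misc</code> is mounted in its default location, the entry should report <code class="language-plaintext highlighter-rouge">enabled</code>, the interpreter path, and <code class="language-plaintext highlighter-rouge">flags: F</code>:</p>
<pre><code class="language-bash"># The file name matches the :name: field from the configuration above
cat /proc/sys/fs/binfmt_misc/qemu-x86_64
</code></pre>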
<h3 id="run-the-tests">Run the Tests</h3>
<p>We should now be able to run <code class="language-plaintext highlighter-rouge">x86_64</code> containers/chroots on <code class="language-plaintext highlighter-rouge">arm32v7</code> and <code class="language-plaintext highlighter-rouge">arm64v8</code> hosts transparently.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker run amd64/ubuntu <span class="nb">uname</span> <span class="nt">-m</span>
x86_64
</code></pre></div></div>
<h1>ARMing Yourself - Working with ARM on x86_64</h1>
<h2 id="introduction">Introduction</h2>
<p>In a prior article, <a href="/2019/07/jetson-containers-introduction">Jetson Containers - Introduction</a>, I showed how to bring <code class="language-plaintext highlighter-rouge">QEMU</code> static binaries into the container, enabling the configuration/creation of ARM images and <code class="language-plaintext highlighter-rouge">debootstrap</code>’ped <a href="/2019/08/building-custom-root-filesystems">root file systems</a>. With a little bit of setup, we can make this transparent to the <code class="language-plaintext highlighter-rouge">chroot</code> or container.</p>
<p><code class="language-plaintext highlighter-rouge">TLDR</code> - Assuming you have installed the packages in the <a href="#required-packages-ubuntu-1804">Required Packages (Ubuntu 18.04)</a> section, skip ahead to <a href="#the-how">The How</a> below.</p>
<h2 id="required-packages-ubuntu-1804">Required Packages (Ubuntu 18.04)</h2>
<p>We need to install <code class="language-plaintext highlighter-rouge">QEMU</code> and <code class="language-plaintext highlighter-rouge">binfmt</code> support so that we can leverage the <a href="https://en.wikipedia.org/wiki/Binfmt_misc">binfmt_misc</a> support in the Linux kernel.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt-get update
<span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--no-install-recommends</span> <span class="se">\</span>
qemu qemu-system-misc qemu-user-static qemu-user binfmt-support
</code></pre></div></div>
<h2 id="the-what">The What</h2>
<p>If you want to understand the kernel support for miscellaneous binary formats in detail, take a look at the <a href="https://www.kernel.org/doc/html/latest/admin-guide/binfmt-misc.html">binfmt-misc</a> documentation.</p>
<p>We’ll need to set up a string in the format <code class="language-plaintext highlighter-rouge">:name:type:offset:magic:mask:interpreter:flags</code> for <code class="language-plaintext highlighter-rouge">arm32v7</code> and <code class="language-plaintext highlighter-rouge">arm64v8</code> (<code class="language-plaintext highlighter-rouge">aarch64</code>). With <code class="language-plaintext highlighter-rouge">QEMU</code> and <code class="language-plaintext highlighter-rouge">binfmt-support</code> installed, this is straightforward:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># arm32v7</span>
<span class="nv">$ </span><span class="nb">cat</span> /var/lib/binfmts/qemu-arm
qemu-user-static
magic
0
<span class="se">\x</span>7f<span class="se">\x</span>45<span class="se">\x</span>4c<span class="se">\x</span>46<span class="se">\x</span>01<span class="se">\x</span>01<span class="se">\x</span>01<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>02<span class="se">\x</span>00<span class="se">\x</span>28<span class="se">\x</span>00
<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>00<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>fe<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff
/usr/bin/qemu-arm-static
<span class="nb">yes</span>
<span class="c"># arm64v8 (aarch64)</span>
<span class="nv">$ </span><span class="nb">cat</span> /var/lib/binfmts/qemu-aarch64
qemu-user-static
magic
0
<span class="se">\x</span>7f<span class="se">\x</span>45<span class="se">\x</span>4c<span class="se">\x</span>46<span class="se">\x</span>02<span class="se">\x</span>01<span class="se">\x</span>01<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>00<span class="se">\x</span>02<span class="se">\x</span>00<span class="se">\x</span>b7<span class="se">\x</span>00
<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>00<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>fe<span class="se">\x</span>ff<span class="se">\x</span>ff<span class="se">\x</span>ff
/usr/bin/qemu-aarch64-static
<span class="nb">yes</span>
</code></pre></div></div>
<p>Note that <code class="language-plaintext highlighter-rouge">\x7f\x45\x4c\x46</code> is <code class="language-plaintext highlighter-rouge">\x7fELF</code>, the <a href="http://man7.org/linux/man-pages/man5/elf.5.html">ELF</a> format <a href="https://en.wikipedia.org/wiki/Magic_number_(programming)">magic number</a>.</p>
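<p>The table below decodes those 20 bytes field by field. To cross-check the field names against a real binary, <code class="language-plaintext highlighter-rouge">readelf</code> prints the same header in human-readable form (the values shown will be those of your host architecture, but the layout is identical):</p>
<pre><code class="language-bash"># Show the decoded ELF header: Class, Data, Version, Type, Machine, ...
readelf -h /bin/ls
</code></pre>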
<table>
<thead>
<tr>
<th>arm32v7</th>
<th>ident</th>
<th>arm64v8</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>\x7f</td>
<td>ELF MAGIC NUMBER 0</td>
<td>\x7f</td>
<td> </td>
</tr>
<tr>
<td>\x45</td>
<td>ELF MAGIC NUMBER 1</td>
<td>\x45</td>
<td> </td>
</tr>
<tr>
<td>\x4c</td>
<td>ELF MAGIC NUMBER 2</td>
<td>\x4c</td>
<td> </td>
</tr>
<tr>
<td>\x46</td>
<td>ELF MAGIC NUMBER 3</td>
<td>\x46</td>
<td> </td>
</tr>
<tr>
<td>\x01</td>
<td>ei_class</td>
<td>\x02</td>
<td>(bitness): ELFCLASS32 = 1, ELFCLASS64 = 2</td>
</tr>
<tr>
<td>\x01</td>
<td>ei_data</td>
<td>\x01</td>
<td>processor-specific data in the file is two’s complement, little-endian = 1, or two’s complement, big-endian = 2</td>
</tr>
<tr>
<td>\x01</td>
<td>ei_version</td>
<td>\x01</td>
<td>ELF specification version, current = 1</td>
</tr>
<tr>
<td>\x00</td>
<td>e_ident</td>
<td>\x00</td>
<td>Remainder of e_ident block padded out with \x00 to fill 16 bytes</td>
</tr>
<tr>
<td>\x00</td>
<td> </td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x00</td>
<td> </td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x00</td>
<td> </td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x00</td>
<td> </td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x00</td>
<td> </td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x00</td>
<td> </td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x00</td>
<td> </td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x00</td>
<td> </td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x02</td>
<td>e_type</td>
<td>\x02</td>
<td>Executable = 2</td>
</tr>
<tr>
<td>\x00</td>
<td>e_machine (high)</td>
<td>\x00</td>
<td> </td>
</tr>
<tr>
<td>\x28</td>
<td>e_machine (low)</td>
<td>\xb7</td>
<td>arm32 = 40 (x28), arm64 = 183 (xb7)</td>
</tr>
<tr>
<td>\x00</td>
<td>e_version</td>
<td>\x00</td>
<td> </td>
</tr>
</tbody>
</table>
<p>With the magic found and the header decoded, we have our format fleshed out.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>:name:type:offset:magic:mask:interpreter:flags
</code></pre></div></div>
<ul>
<li><code class="language-plaintext highlighter-rouge">name</code>: qemu-arm or qemu-aarch64</li>
<li><code class="language-plaintext highlighter-rouge">type</code>: M to use the format’s magic number for identification</li>
<li><code class="language-plaintext highlighter-rouge">offset</code>: optional, 0 by default; we’ll use the default.</li>
<li><code class="language-plaintext highlighter-rouge">magic</code>: found above:
<ul>
<li>arm32v7: <code class="language-plaintext highlighter-rouge">\x7f\x45\x4c\x46\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00</code></li>
<li>arm64v8: <code class="language-plaintext highlighter-rouge">\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00</code></li>
</ul>
</li>
<li>defined <code class="language-plaintext highlighter-rouge">mask</code>: line after magic above
<ul>
<li>arm32v7: <code class="language-plaintext highlighter-rouge">\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff</code></li>
<li>arm64v8: <code class="language-plaintext highlighter-rouge">\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff</code></li>
</ul>
</li>
<li>applied <code class="language-plaintext highlighter-rouge">mask</code>: We don’t care about the <code class="language-plaintext highlighter-rouge">EI_OSABI</code> byte or the rest of <code class="language-plaintext highlighter-rouge">e_ident</code>. Change those bytes to <code class="language-plaintext highlighter-rouge">\x00</code>
<ul>
<li>arm32v7: <code class="language-plaintext highlighter-rouge">\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\xff\xff\xff</code></li>
<li>arm64v8: <code class="language-plaintext highlighter-rouge">\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\xff\xff\xff</code></li>
</ul>
</li>
<li><code class="language-plaintext highlighter-rouge">interpreter</code>: line after the mask above, location of our qemu binary</li>
<li><code class="language-plaintext highlighter-rouge">flags</code>: F - fix binary - the interpreter is always available after emulation is installed. This is key, as it makes the interpreter available inside other mount namespaces (like containers) and <code class="language-plaintext highlighter-rouge">chroots</code>; see the short chroot example after this list.</li>
</ul>
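<p>As a quick illustration of why the <code class="language-plaintext highlighter-rouge">F</code> flag matters: once the formats are registered (see the Configuration section below), a foreign-architecture <code class="language-plaintext highlighter-rouge">chroot</code> works even though the QEMU binary only exists on the host. The rootfs path here is just a placeholder for something like a <code class="language-plaintext highlighter-rouge">debootstrap</code>’ped root file system:</p>
<pre><code class="language-bash"># No need to copy qemu-aarch64-static into the rootfs thanks to the F flag
sudo chroot /path/to/arm64-rootfs uname -m
# aarch64
</code></pre>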
<p><code class="language-plaintext highlighter-rouge">arm32v7</code> Configuration:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>:qemu-arm:M::\x7f\x45\x4c\x46\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:F
</code></pre></div></div>
<p><code class="language-plaintext highlighter-rouge">arm64v8</code> Configuration:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>:qemu-aarch64:M::\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F
</code></pre></div></div>
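<p>These strings are handed to <code class="language-plaintext highlighter-rouge">systemd-binfmt</code> in the next section, but if you just want a throwaway test without touching <code class="language-plaintext highlighter-rouge">/lib/binfmt.d</code>, the same string can be written straight to the kernel’s register interface. This does not persist across reboots and assumes <code class="language-plaintext highlighter-rouge">binfmt_misc</code> is already mounted:</p>
<pre><code class="language-bash"># Non-persistent registration of the arm64v8 format
sudo sh -c 'echo ":qemu-aarch64:M::\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F" > /proc/sys/fs/binfmt_misc/register'
</code></pre>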
<h2 id="the-how">The How</h2>
<h3 id="write-the-failing-test">Write the Failing Test</h3>
<p>If you try to run a container with an unknown format, it should fail with an error close to one of the following:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">standard_init_linux.go:211: exec user process caused "exec format error"</code></li>
<li><code class="language-plaintext highlighter-rouge">standard_init_linux.go:211: exec user process caused "no such file or directory"</code></li>
</ul>
<p>Go ahead and give it a try; we’ll come back to these after configuring the system to verify functionality.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker run arm32v7/busybox <span class="nb">uname</span> <span class="nt">-m</span>
standard_init_linux.go:211: <span class="nb">exec </span>user process caused <span class="s2">"exec format error"</span>
</code></pre></div></div>
<p>or</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker run arm64v8/busybox <span class="nb">uname</span> <span class="nt">-m</span>
standard_init_linux.go:211: <span class="nb">exec </span>user process caused <span class="s2">"exec format error"</span>
</code></pre></div></div>
<h3 id="configuration">Configuration</h3>
<p>With the details hashed out, it is time to set up our <code class="language-plaintext highlighter-rouge">systemd-binfmt.service</code> configuration:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Create the binfmt.d directory which is read at boot to configure</span>
<span class="c"># additional binary executable formats which can be handled by the system.</span>
<span class="nb">sudo mkdir</span> <span class="nt">-p</span> /lib/binfmt.d
<span class="c"># Create a configuration for arm32v7</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s1">'echo :qemu-arm:M::\\x7f\\x45\\x4c\\x46\\x01\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x28\\x00:\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xfe\\xff\\xff\\xff:/usr/bin/qemu-arm-static:F > /lib/binfmt.d/qemu-arm-static.conf'</span>
<span class="c"># Create a configuration for arm64v8</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s1">'echo :qemu-aarch64:M::\\x7f\\x45\\x4c\\x46\\x02\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\xb7\\x00:\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xfe\\xff\\xff\\xff:/usr/bin/qemu-aarch64-static:F > /lib/binfmt.d/qemu-aarch64-static.conf'</span>
<span class="c"># Restart the service to force an evaluation of the /lib/binfmt.d directory</span>
<span class="nb">sudo </span>systemctl restart systemd-binfmt.service
</code></pre></div></div>
<h3 id="run-the-tests">Run the Tests</h3>
<p>We should now be able to pull and run commands from <code class="language-plaintext highlighter-rouge">arm32v7</code> and <code class="language-plaintext highlighter-rouge">arm64v8</code> containers and applications transparently.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker run arm32v7/busybox <span class="nb">uname</span> <span class="nt">-m</span>
armv7l
</code></pre></div></div>
<p>or</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker run arm64v8/busybox <span class="nb">uname</span> <span class="nt">-m</span>
aarch64
</code></pre></div></div>
<h1 id="remote-iot-edge-development-in-azure-vms--template-images">Remote IoT Edge Development in Azure VMs &amp; Template Images</h1>
<p>Developing Azure IoT Edge modules can take a bit of careful configuration. To standardize on our development environments, we can set up remote VMs and leverage the VS Code Remote SSH extension, allowing us to edit and debug from our host environment while keeping our dev environment clean. This also allows us to develop modules for foreign platforms and architectures.</p>
<p>We can also generalize the VM and create an image template from which we can generate new VMs in the future.</p>
<h2 id="setup-prerequisites">Setup Prerequisites</h2>
<ul>
<li>SSH</li>
<li>VS Code</li>
</ul>
<h4 id="ssh-on-linux--macos">SSH on Linux & MacOS</h4>
<p>SSH should already be installed.</p>
<h4 id="ssh-on-windows">SSH on Windows</h4>
<p>In an admin PowerShell console:</p>
<pre><code class="language-PowerShell">Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
</code></pre>
<p>Close the Window. Next time you open a window, you should have SSH available on the command line.</p>
<h4 id="create-ssh-public-key">Create SSH Public Key</h4>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh-keygen <span class="nt">-t</span> rsa <span class="nt">-b</span> 4096 <span class="nt">-C</span> <span class="s2">"your_email@example.com"</span>
<span class="c"># Once done, you’ll need your ssh public key for the VM</span>
<span class="nb">cat </span>C:<span class="se">\U</span>sers<span class="se">\<</span>your_user><span class="se">\.</span>ssh<span class="se">\i</span>d_rsa.pub
</code></pre></div></div>
<h2 id="create-an-azure-virtual-machine">Create an Azure Virtual Machine</h2>
<p>In the Azure portal, <a href="https://ms.portal.azure.com/#create/Microsoft.VirtualMachine">Create a virtual machine</a>:</p>
<ul>
<li>Use a simple name in all lower case</li>
<li>Choose a location near you such as the <code class="language-plaintext highlighter-rouge">North Europe</code> region</li>
<li>Choose <code class="language-plaintext highlighter-rouge">Ubuntu Server 18.04 LTS</code> for your image</li>
<li><code class="language-plaintext highlighter-rouge">Standard D2s V3</code> should do for now.</li>
<li>Authentication Type: <code class="language-plaintext highlighter-rouge">SSH Public Key</code></li>
<li>Use a familiar user name such as your AD account</li>
<li>Paste the public key from the above console session</li>
<li>The rest of the defaults will work for now. Click <code class="language-plaintext highlighter-rouge">Review + create</code></li>
</ul>
<h3 id="connect-to-the-vm">Connect to the VM</h3>
<p>From the Virtual machine Overview page:</p>
<ul>
<li>Click <code class="language-plaintext highlighter-rouge">Connect</code></li>
<li>Choose the SSH tab</li>
<li>Copy the ssh credentials</li>
<li>Open a command prompt and verify that you have ssh available</li>
<li>Paste the connection details from the portal, for example: <code class="language-plaintext highlighter-rouge">ssh my_user@192.168.1.206</code></li>
<li>You will be prompted to accept the host key fingerprint and to enter your passphrase (an optional <code class="language-plaintext highlighter-rouge">~/.ssh/config</code> alias is sketched after this list).</li>
</ul>
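<p>Optionally, you can add a host alias to <code class="language-plaintext highlighter-rouge">~/.ssh/config</code> so your terminal and the VS Code Remote SSH extension share the same shorthand. The alias, host name, and user below are placeholders; substitute the values from the portal:</p>
<pre><code class="language-bash"># Append a host alias; adjust HostName and User to match your VM
{
  echo 'Host iotdevvm'
  echo '    HostName myiotedgedev.northeurope.cloudapp.azure.com'
  echo '    User my_user'
  echo '    IdentityFile ~/.ssh/id_rsa'
} >> ~/.ssh/config
</code></pre>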
<h3 id="configuring-the-vm">Configuring the VM</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Install Docker</span>
<span class="nb">sudo </span>apt update
<span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> docker.io
<span class="c"># Configure Docker</span>
<span class="nb">sudo </span>usermod <span class="nt">-aG</span> docker <span class="nv">$USER</span>
<span class="nb">sudo </span>reboot
<span class="c"># Install .NET Core and its dependencies</span>
<span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> apt-transport-https
wget <span class="nt">-q</span> https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb <span class="nt">-O</span> packages-microsoft-prod.deb
<span class="nb">sudo </span>dpkg <span class="nt">-i</span> packages-microsoft-prod.deb
<span class="nb">rm </span>packages-microsoft-prod.deb
<span class="nb">sudo </span>apt update
<span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> dotnet-sdk-3.1
<span class="c"># Install docker-compose and its dependencies</span>
<span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> curl
<span class="nb">sudo </span>curl <span class="nt">-L</span> <span class="s2">"https://github.com/docker/compose/releases/download/1.29.2/docker-compose-</span><span class="si">$(</span><span class="nb">uname</span> <span class="nt">-s</span><span class="si">)</span><span class="s2">-</span><span class="si">$(</span><span class="nb">uname</span> <span class="nt">-m</span><span class="si">)</span><span class="s2">"</span> <span class="nt">-o</span> /usr/local/bin/docker-compose
<span class="c"># Make the downloaded binary executable; curl does not set the execute bit</span>
<span class="nb">sudo chmod</span> +x /usr/local/bin/docker-compose
<span class="c"># Install IoT Edge Hub Dev Simulator and its dependencies</span>
<span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> python3-pip
python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--upgrade</span> pip
<span class="nb">sudo</span> <span class="nt">-H</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--upgrade</span> pip
<span class="nb">sudo</span> <span class="nt">-H</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--ignore-installed</span> PyYAML
<span class="nb">sudo</span> <span class="nt">-H</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--upgrade</span> iotedgehubdev
</code></pre></div></div>
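<p>Once the VM has rebooted, a quick smoke test confirms the tools installed above are on the path and that your user can talk to the Docker daemon (the exact version output will differ):</p>
<pre><code class="language-bash">docker run --rm hello-world
docker-compose --version
dotnet --version
iotedgehubdev --version
</code></pre>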
<h2 id="configuring-vs-code">Configuring VS Code</h2>
<h3 id="connect-with-vs-code">Connect With VS Code</h3>
<ul>
<li>Install the <code class="language-plaintext highlighter-rouge">Remote – SSH extension</code></li>
<li>Use <code class="language-plaintext highlighter-rouge">Ctrl+Shift+P</code>, <code class="language-plaintext highlighter-rouge">Remote SSH : Add New SSH Host…</code></li>
<li>Enter your ssh information from the azure portal: <code class="language-plaintext highlighter-rouge">ssh my_user@192.168.1.206</code>, then select your user ssh config folder</li>
<li>Use <code class="language-plaintext highlighter-rouge">Ctrl+Shift+P</code>, <code class="language-plaintext highlighter-rouge">Remote SSH : Connect To Host…</code>, then select your host.</li>
<li>A new copy of VS Code will open. Enter your passphrase. You’ll see the lower left corner of the editor show green and SSH: <code class="language-plaintext highlighter-rouge"><your host></code></li>
</ul>
<h3 id="installing-extensions">Installing Extensions</h3>
<ul>
<li>Click Extensions. You’ll see the list of local and remote installed extensions. Go through your installed extensions and install them in the remote VM via the green install buttons. Install:
<ul>
<li>Azure IoT Edge</li>
<li>C#</li>
<li>Docker</li>
</ul>
</li>
<li>Once all are installed, click the <code class="language-plaintext highlighter-rouge">[reload required]</code> button. This will restart VS Code. Enter your passphrase again.
<code class="language-plaintext highlighter-rouge">Ctrl+Shift+P</code> => <code class="language-plaintext highlighter-rouge">Azure IoT Edge: New IoT Edge Solution</code>
<ul>
<li>Select home folder</li>
<li>Name: testsolution</li>
<li>C# Module</li>
<li>Name: testmodule</li>
<li>localhost:5000/testmodule</li>
</ul>
</li>
</ul>
<p>This will create a sample edge solution. Once the <code class="language-plaintext highlighter-rouge">dotnet restore</code> is finished, VS Code will reload with the workspace selected; enter your SSH passphrase again. This solution is just a full example for demonstration purposes; you can also clone an existing repository and open that folder in VS Code instead.</p>
<h2 id="getting-started">Getting Started</h2>
<p>With the extensions installed on the remote VM, we can use the IoT Hub Extension to connect to our IoT Hub:</p>
<p><img src="/assets/iot_hub_conn.png" alt="Connect to Iot Hub" /></p>
<p>With our IoT Hub Connection made, we can select our device and configure the remote VM’s simulator for that device:</p>
<p><img src="/assets/setup_simulator.png" alt="Setup IoT Edge Simulator" /></p>
<p>We can now run our application with the simulator in the remote VM. <code class="language-plaintext highlighter-rouge">Ctrl+Shift+P</code> => <code class="language-plaintext highlighter-rouge">Azure IoT Edge: Build and Run IoT Edge Solution in Simulator</code>. Choose the debug template so we can have symbols available and attach to the remote container:</p>
<p><img src="/assets/run_in_simulator.png" alt="Run in Simulator" /></p>
<p>Since we chose the debug template, we can select the debug icon on the left hand side and choose to debug a remote module. Make sure to set a breakpoint in the <code class="language-plaintext highlighter-rouge">PipeMessage</code> method in the <code class="language-plaintext highlighter-rouge">testmodule</code> so you can see the breakpoint hit when the simulated temperature sensor relays telemetry.</p>
<p><img src="/assets/debug_module.png" alt="Debug Remote Module" /></p>
<p>Once you are done, click the SSH connection in the bottom left corner, then disconnect the session.</p>
<p><img src="/assets/disconnect.png" alt="Disconnect" /></p>
<h2 id="creating-a-vm-template">Creating a VM Template</h2>
<p>If you would like to make a template from which other VMs can be created, you can follow these steps.</p>
<h4 id="install-azure-cli">Install Azure CLI</h4>
<p>Follow the <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest">latest Azure CLI installation instructions for your platform</a></p>
<h4 id="turn-vm-into-an-image">Turn VM Into an Image</h4>
<p><code class="language-plaintext highlighter-rouge">warning</code> Make sure you are done with your VM as it will not be usable after this.</p>
<p><code class="language-plaintext highlighter-rouge">note</code> This applies to Linux VMs only.</p>
<p>The following is a summary of instructions from <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image">Azure VM Capture Image</a>. Please read the documentation for detailed explanations.</p>
<p>In your SSH session on the VM:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>waagent <span class="nt">-deprovision</span>+user
enter y <span class="o">[</span>Enter]
<span class="nb">exit</span>
</code></pre></div></div>
<p>On your machine</p>
<pre><code class="language-PowerShell">az login
az account set --subscription <your subscription name>
az vm deallocate --resource-group <rg-name> --name myvmname
az vm generalize --resource-group <rg-name> --name myvmname
# The documentation omits --location, but you need it or you'll get an error
az image create --resource-group <rg-name> --name iotdevvm-template --source myvmname --location northeurope
az vm create --resource-group <rg-name> \
--name iotedevinstance \
--image iotdevvm-template \
--admin-username <user name> \
--ssh-key-values ~/.ssh/id_rsa.pub \
--location northeurope
</code></pre>
<p>Once you’ve created a VM from your template image, you’ll need to add your user to the docker group again, as the user setup done before imaging has been purged:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>usermod <span class="nt">-aG</span> docker <span class="nv">$USER</span>
</code></pre></div></div>
<p>If you forget to do this, you’ll get permission errors while running remote docker commands.</p>
<h4 id="additional-notes">Additional Notes</h4>
<p>You will likely want to use the <a href="https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management">Start/Stop VMs during off-hours solution in Azure Automation</a> to save money while you aren’t working on the VM.</p>
<p>You can configure a DNS name for your VM in the VM IP settings so that you can address your VM by name rather than by IP, for example <code class="language-plaintext highlighter-rouge">myiotedgedev.northeurope.cloudapp.azure.com</code>.</p>
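<p>One way to set that DNS label, assuming you have the Azure CLI from the previous section, is shown below; substitute your resource group and the name of the VM’s public IP resource:</p>
<pre><code class="language-bash"># Adds a DNS label so the VM resolves as myiotedgedev.northeurope.cloudapp.azure.com
az network public-ip update --resource-group <rg-name> --name <public-ip-name> --dns-name myiotedgedev
</code></pre>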
<p>When creating the VM template, you’ll likely want to configure <a href="http://www.linfo.org/etc_skel.html">/etc/skel</a> to configure the environment when new VM instances are created from the template.</p>
<p>Because your dev environment is running in Azure, you get excellent upload and download rates for everything, especially pulling and publishing container images.</p>
<h1>Jetson Containers - Building TensorFlow Object Detection Samples</h1>
<h1 id="prologue">Prologue</h1>
<p>This post is part of a series covering the NVIDIA Jetson platform. It may help to take a peek through the other posts beforehand.</p>
<ul>
<li><a href="/2019/07/jetson-containers-introduction">Introduction</a></li>
<li><a href="/2019/07/jetson-containers-samples">Building the Samples</a></li>
<li><a href="/2019/07/maximizing-jetson-nano-storage">Maximizing Jetson Nano Dev Kit Storage</a></li>
<li><a href="/2019/07/pushing-images-to-devices">Pushing Images to Devices</a></li>
<li><a href="/2019/07/building-deepstream-images">Building DeepStream Images</a></li>
</ul>
<h1 id="introduction">Introduction</h1>
<p>Installing TensorFlow for Jetson devices seems very straightforward to start. The <a href="https://elinux.org/Jetson_Zoo#TensorFlow">Jetson Zoo</a> lists the packages needed to install TensorFlow, and off you go. This works fine if you install and run everything on the host. If you want to run TensorFlow in a container, then we need to dig deeper. Before going into details, we should look at the image sizes so that there is no shock later.</p>
<p>Installing TensorFlow in a container requires a ton of space. Depending on what dependencies you bring on, your final base images will be <code class="language-plaintext highlighter-rouge">2.76 GB</code> to <code class="language-plaintext highlighter-rouge">7.76 GB</code>. The compiled TensorFlow <code class="language-plaintext highlighter-rouge">.so</code> file is massive. The TensorFlow built by NVIDIA is linked against <code class="language-plaintext highlighter-rouge">cublas</code>, <code class="language-plaintext highlighter-rouge">cudart</code>, <code class="language-plaintext highlighter-rouge">cufft</code>, <code class="language-plaintext highlighter-rouge">curand</code>, <code class="language-plaintext highlighter-rouge">cusolver</code>, <code class="language-plaintext highlighter-rouge">cusparse</code>, <code class="language-plaintext highlighter-rouge">cuDNN</code>, and <code class="language-plaintext highlighter-rouge">TensorRT</code>. If you don’t leverage <code class="language-plaintext highlighter-rouge">cuDNN</code> or <code class="language-plaintext highlighter-rouge">TensorRT</code>, that can save gigabytes off of your images. A little visual aid might help:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>+-------------------------------------------------------------------+
| TensorFlow (1.0 GiB + all of its dependencies) |
+-------------------------------------------------------------------+
+-------------+-------------+-------------+------------+------------+
| L4T Devel |
| (5.67 GiB) ______________________________________________________|
| | cuDNN | TensorRT | Dev | Extra |
| | (720 MiB) | (1.1 GiB) | Versions | Depends |
| | | | of | |
| | | | Libs | |
| | | | | |
+-------------------------------------------------------------------+
+-------------------------------------------------------------------+
| L4T Release (cuda libraries) (1.21 GiB) |
| |
+-------------------------------------------------------------------+
+-------------------------------------------------------------------+
| L4T Base (cuda runtime and profiling apis) (493 MiB) |
+-------------------------------------------------------------------+
+-------------------------------------------------------------------+
| L4T Driver Pack (minimum nvidia libs) (483 MiB) |
+-------------------------------------------------------------------+
+-------------------------------------------------------------------+
| Ubuntu 18.04 (80.4MiB) |
+-------------------------------------------------------------------+
</code></pre></div></div>
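<p>If you want to see those link dependencies for yourself, <code class="language-plaintext highlighter-rouge">ldd</code> on the TensorFlow native library makes them explicit. The path below is typical for a Python 3.6 pip installation of TensorFlow 1.x; adjust it to your environment:</p>
<pre><code class="language-bash"># Filter the dynamic dependencies down to the NVIDIA libraries
ldd /usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so \
  | grep -E 'cudnn|nvinfer|cublas|cudart|cufft|curand|cusolver|cusparse'
</code></pre>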
<h2 id="tensorflow-base">TensorFlow Base</h2>
<p>We can build our images from <code class="language-plaintext highlighter-rouge">base</code>, <code class="language-plaintext highlighter-rouge">release</code>, or <code class="language-plaintext highlighter-rouge">devel</code> images, layering in what we need. Given that we know the exact CUDA libraries we need, we can skip over <code class="language-plaintext highlighter-rouge">release</code> and focus on <code class="language-plaintext highlighter-rouge">base</code> and <code class="language-plaintext highlighter-rouge">devel</code> images. This is a little murky and depends on the definition of <code class="language-plaintext highlighter-rouge">dependency</code> and <code class="language-plaintext highlighter-rouge">required</code> you want to use. TensorFlow is compiled against many things, but we don’t have to use those features. Dockerfiles for TensorFlow along with the <a href="#tensorflow-object-detection-libraries">Object Detection</a> sample can be found in the <a href="https://github.com/idavis/jetson-containers">jetson-containers</a> repository. The package checksums (<code class="language-plaintext highlighter-rouge">md5sum</code> values) used in this post are for the Jetson Nano dev kit, copied directly from the JetPack 4.2.1 <code class="language-plaintext highlighter-rouge">devel</code> image Dockerfile. You can build these images out for your device by copying the appropriate lines.</p>
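<p>Each of the Dockerfiles below is parameterized with <code class="language-plaintext highlighter-rouge">IMAGE_NAME</code> and <code class="language-plaintext highlighter-rouge">TAG</code> build arguments. A typical invocation looks like the following; the repository name, tag, and Dockerfile name are illustrative and should match however you tagged your L4T base images:</p>
<pre><code class="language-bash">docker build \
  --build-arg IMAGE_NAME=l4t \
  --build-arg TAG=32.2-nano-dev-jetpack-4.2.1 \
  -t l4t:32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-devel \
  -f Dockerfile.devel .
</code></pre>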
<p>TLDR for images sizes built in the sections below:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>l4t 32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-base-min 2.76GB
l4t 32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-base-min-full 4.41GB
l4t 32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-devel 6.76GB
</code></pre></div></div>
<h3 id="devel">Devel</h3>
<p>If we start off from the <code class="language-plaintext highlighter-rouge">devel</code> images for our device, the TensorFlow installation is pretty straightforward. I’ve moved the <code class="language-plaintext highlighter-rouge">h5py</code> installation into the <code class="language-plaintext highlighter-rouge">apt</code> portion as it is set at the correct version range (as compared to the <a href="https://elinux.org/Jetson_Zoo#TensorFlow">Jetson Zoo</a>).</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">ARG</span><span class="s"> IMAGE_NAME</span>
<span class="k">ARG</span><span class="s"> TAG</span>
<span class="k">FROM</span><span class="s"> ${IMAGE_NAME}:${TAG}-devel as tensorflow-base</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span> build-essential <span class="se">\
</span> libhdf5-dev <span class="se">\
</span> libhdf5-serial-dev <span class="se">\
</span> python3-dev <span class="se">\
</span> python3-h5py <span class="se">\
</span> python3-pip <span class="se">\
</span> python3-setuptools <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--upgrade</span> pip <span class="o">&&</span> <span class="se">\
</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--upgrade</span> setuptools <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">-U</span> numpy grpcio absl-py py-cpuinfo psutil portpicker grpcio six mock requests gast astor termcolor
<span class="c"># Install TensorFlow</span>
<span class="c">#RUN python3 -m pip install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu</span>
<span class="c"># can browse from https://developer.download.nvidia.com/compute/redist/jp/</span>
<span class="c">#RUN python3 -m pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==$TF_VERSION+nv$NV_VERSION</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--extra-index-url</span> https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu<span class="o">==</span>1.14.0+nv19.7
</code></pre></div></div>
<p>This will give us an image we can use as a base for build agents and quick testing, but we are sitting at <code class="language-plaintext highlighter-rouge">~6.76 GB</code>, which is a lot of space for something like a Jetson Nano. With an image that large, you’ll be hard pressed to update the base image without running out of eMMC storage, and you’ll need additional storage configured for Docker images. If you’re using a TX2 or Xavier, you will probably be fine.</p>
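<p>Before committing to the larger images, it is worth checking how much room you actually have. A couple of quick commands show the device’s free space and what Docker is already consuming:</p>
<pre><code class="language-bash"># Free space on the root filesystem (eMMC on a stock Nano dev kit)
df -h /
# Space used by images, containers, and volumes
docker system df
</code></pre>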
<h3 id="minimum-full">Minimum Full</h3>
<p>Like the <code class="language-plaintext highlighter-rouge">devel</code> sample above, setting up the base image for TensorFlow, which has all required NVIDIA libraries, requires a considerable amount of the <code class="language-plaintext highlighter-rouge">devel</code> image, but we can still make some nice gains. We need to install <code class="language-plaintext highlighter-rouge">cublas-dev</code> and <code class="language-plaintext highlighter-rouge">cudart-dev</code> for <code class="language-plaintext highlighter-rouge">TensorRT</code> - <code class="language-plaintext highlighter-rouge">¯\_(ツ)_/¯</code> - as well as <code class="language-plaintext highlighter-rouge">cublas</code>, <code class="language-plaintext highlighter-rouge">cudart</code>, <code class="language-plaintext highlighter-rouge">cufft</code>, <code class="language-plaintext highlighter-rouge">curand</code>, <code class="language-plaintext highlighter-rouge">cusolver</code>, <code class="language-plaintext highlighter-rouge">cusparse</code> as normal CUDA libs. Then we’ll need to set up <code class="language-plaintext highlighter-rouge">cuDNN</code>, <code class="language-plaintext highlighter-rouge">TensorRT</code>, <code class="language-plaintext highlighter-rouge">Graph Surgeon</code>, <code class="language-plaintext highlighter-rouge">UFF Converter</code>, <code class="language-plaintext highlighter-rouge">OpenCV</code> (with dependencies), and finally TensorFlow.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">ARG</span><span class="s"> DEPENDENCIES_IMAGE</span>
<span class="k">ARG</span><span class="s"> IMAGE_NAME</span>
<span class="k">ARG</span><span class="s"> TAG</span>
<span class="k">FROM</span><span class="s"> ${DEPENDENCIES_IMAGE} as dependencies</span>
<span class="k">ARG</span><span class="s"> IMAGE_NAME</span>
<span class="k">ARG</span><span class="s"> TAG</span>
<span class="k">FROM</span><span class="s"> ${IMAGE_NAME}:${TAG}-base as tensorflow-base</span>
<span class="c"># CUDA Toolkit for L4T</span>
<span class="c"># TensorRT: cuda-cublas-dev-10-0 cuda-cudart-dev-10-0</span>
<span class="c"># TensorFlow: cuda-cublas-10-0 cuda-cudart-10-0 cuda-cufft-10-0</span>
<span class="c"># cuda-curand-10-0 cuda-cusolver-10-0 cuda-cusparse-10-0</span>
<span class="k">ARG</span><span class="s"> CUDA_TOOLKIT_PKG="cuda-repo-l4t-10-0-local-${CUDA_PKG_VERSION}_arm64.deb"</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/${CUDA_TOOLKIT_PKG} ${CUDA_TOOLKIT_PKG}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"0e12b2f53c7cbe4233c2da73f7d8e6b4 </span><span class="k">${</span><span class="nv">CUDA_TOOLKIT_PKG</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">--force-all</span> <span class="nt">-i</span> <span class="k">${</span><span class="nv">CUDA_TOOLKIT_PKG</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="k">${</span><span class="nv">CUDA_TOOLKIT_PKG</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> apt-get update <span class="o">&&</span> <span class="se">\
</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--allow-downgrades</span> <span class="se">\
</span> cuda-cublas-dev-10-0 <span class="se">\
</span> cuda-cudart-dev-10-0 <span class="se">\
</span> cuda-cublas-10-0 <span class="se">\
</span> cuda-cudart-10-0 <span class="se">\
</span> cuda-cufft-10-0 <span class="se">\
</span> cuda-curand-10-0 <span class="se">\
</span> cuda-cusolver-10-0 <span class="se">\
</span> cuda-cusparse-10-0 <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">--purge</span> cuda-repo-l4t-10-0-local-10.0.326 <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="c"># NVIDIA CUDA Deep Neural Network library (cuDNN)</span>
<span class="k">ENV</span><span class="s"> CUDNN_VERSION 7.5.0.56</span>
<span class="k">ENV</span><span class="s"> CUDNN_PKG_VERSION=${CUDA_VERSION}-1</span>
<span class="k">LABEL</span><span class="s"> com.nvidia.cudnn.version="${CUDNN_VERSION}"</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/libcudnn7_$CUDNN_VERSION-1+cuda10.0_arm64.deb libcudnn7_$CUDNN_VERSION-1+cuda10.0_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"9f30aa86e505a3b83b127ed7a51309a1 libcudnn7_</span><span class="nv">$CUDNN_VERSION</span><span class="s2">-1+cuda10.0_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> libcudnn7_<span class="nv">$CUDNN_VERSION</span><span class="nt">-1</span>+cuda10.0_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>libcudnn7_<span class="nv">$CUDNN_VERSION</span><span class="nt">-1</span>+cuda10.0_arm64.deb
<span class="k">COPY</span><span class="s"> --from=dependencies /data/libcudnn7-dev_$CUDNN_VERSION-1+cuda10.0_arm64.deb libcudnn7-dev_$CUDNN_VERSION-1+cuda10.0_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"a010637c80859b2143ef24461ee2ef97 libcudnn7-dev_</span><span class="nv">$CUDNN_VERSION</span><span class="s2">-1+cuda10.0_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> libcudnn7-dev_<span class="nv">$CUDNN_VERSION</span><span class="nt">-1</span>+cuda10.0_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>libcudnn7-dev_<span class="nv">$CUDNN_VERSION</span><span class="nt">-1</span>+cuda10.0_arm64.deb
<span class="c"># NVIDIA TensorRT</span>
<span class="k">ENV</span><span class="s"> LIBINFER_VERSION 5.1.6</span>
<span class="k">ENV</span><span class="s"> LIBINFER_PKG_VERSION=${LIBINFER_VERSION}-1</span>
<span class="k">LABEL</span><span class="s"> com.nvidia.libinfer.version="${LIBINFER_VERSION}"</span>
<span class="k">ENV</span><span class="s"> LIBINFER_PKG libnvinfer5_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb</span>
<span class="k">ENV</span><span class="s"> LIBINFER_DEV_PKG libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb</span>
<span class="k">ENV</span><span class="s"> LIBINFER_SAMPLES_PKG libnvinfer-samples_${LIBINFER_PKG_VERSION}+cuda10.0_all.deb</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/${LIBINFER_PKG} ${LIBINFER_PKG}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"dca1e2dadeae2186b57a11861fac7652 </span><span class="k">${</span><span class="nv">LIBINFER_PKG</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> <span class="k">${</span><span class="nv">LIBINFER_PKG</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="k">${</span><span class="nv">LIBINFER_PKG</span><span class="k">}</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/${LIBINFER_DEV_PKG} ${LIBINFER_DEV_PKG}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"0e0c0c6d427847d5994f04fbce0401d2 </span><span class="k">${</span><span class="nv">LIBINFER_DEV_PKG</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> <span class="k">${</span><span class="nv">LIBINFER_DEV_PKG</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="k">${</span><span class="nv">LIBINFER_DEV_PKG</span><span class="k">}</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/${LIBINFER_SAMPLES_PKG} ${LIBINFER_SAMPLES_PKG}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"e8f021ea1fad99d99f0f551d7ea3146a </span><span class="k">${</span><span class="nv">LIBINFER_SAMPLES_PKG</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> <span class="k">${</span><span class="nv">LIBINFER_SAMPLES_PKG</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="k">${</span><span class="nv">LIBINFER_SAMPLES_PKG</span><span class="k">}</span>
<span class="k">ENV</span><span class="s"> TENSORRT_VERSION 5.1.6.1</span>
<span class="k">ENV</span><span class="s"> TENSORRT_PKG_VERSION=${TENSORRT_VERSION}-1</span>
<span class="k">LABEL</span><span class="s"> com.nvidia.tensorrt.version="${TENSORRT_VERSION}"</span>
<span class="k">ENV</span><span class="s"> TENSORRT_PKG tensorrt_${TENSORRT_PKG_VERSION}+cuda10.0_arm64.deb</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/${TENSORRT_PKG} ${TENSORRT_PKG}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"66e6df17b7a92d32dd3465bdfca9fc8d </span><span class="k">${</span><span class="nv">TENSORRT_PKG</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> <span class="k">${</span><span class="nv">TENSORRT_PKG</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="k">${</span><span class="nv">TENSORRT_PKG</span><span class="k">}</span>
<span class="c"># Graph Surgeon</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/graphsurgeon-tf_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb graphsurgeon-tf_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"5729cc195d365335991c58abd75e0c99 graphsurgeon-tf_</span><span class="k">${</span><span class="nv">LIBINFER_PKG_VERSION</span><span class="k">}</span><span class="s2">+cuda10.0_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> graphsurgeon-tf_<span class="k">${</span><span class="nv">LIBINFER_PKG_VERSION</span><span class="k">}</span>+cuda10.0_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>graphsurgeon-tf_<span class="k">${</span><span class="nv">LIBINFER_PKG_VERSION</span><span class="k">}</span>+cuda10.0_arm64.deb
<span class="c"># UFF Converter</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/uff-converter-tf_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb uff-converter-tf_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"b6310b19820a8b844d36dc597d2bf835 uff-converter-tf_</span><span class="k">${</span><span class="nv">LIBINFER_PKG_VERSION</span><span class="k">}</span><span class="s2">+cuda10.0_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> uff-converter-tf_<span class="k">${</span><span class="nv">LIBINFER_PKG_VERSION</span><span class="k">}</span>+cuda10.0_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>uff-converter-tf_<span class="k">${</span><span class="nv">LIBINFER_PKG_VERSION</span><span class="k">}</span>+cuda10.0_arm64.deb
<span class="c"># Install dependencies for OpenCV</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--no-install-recommends</span> <span class="se">\
</span> libavcodec-extra57 <span class="se">\
</span> libavformat57 <span class="se">\
</span> libavutil55 <span class="se">\
</span> libcairo2 <span class="se">\
</span> libgtk2.0-0 <span class="se">\
</span> libswscale4 <span class="se">\
</span> libtbb2 <span class="se">\
</span> libtbb-dev <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="c">## Additional OpenCV dependencies usually installed by the CUDA Toolkit</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> <span class="se">\
</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span> libgstreamer1.0-0 <span class="se">\
</span> libgstreamer-plugins-base1.0-0 <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="c"># Open CV 3.3.1</span>
<span class="k">ENV</span><span class="s"> OPENCV_VERSION 3.3.1</span>
<span class="k">ENV</span><span class="s"> OPENCV_PKG_VERSION=${OPENCV_VERSION}-2-g31ccdfe11</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/libopencv_${OPENCV_PKG_VERSION}_arm64.deb libopencv_${OPENCV_PKG_VERSION}_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"dd5b571c08a0098141203daec2ea1acc libopencv_</span><span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span><span class="s2">_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> libopencv_<span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span>_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>libopencv_<span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span>_arm64.deb
<span class="c">## Open CV python binding</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"35776ce159afa78a0fe727d4a3c5b6fa libopencv-python_</span><span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span><span class="s2">_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> libopencv-python_<span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span>_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>libopencv-python_<span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span>_arm64.deb
</code></pre></div></div>
<p>The rest follows the same steps as the <code class="language-plaintext highlighter-rouge">devel</code> image to install TensorFlow:</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span> build-essential <span class="se">\
</span> libhdf5-dev <span class="se">\
</span> libhdf5-serial-dev <span class="se">\
</span> python3-dev <span class="se">\
</span> python3-h5py <span class="se">\
</span> python3-pip <span class="se">\
</span> python3-setuptools <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--upgrade</span> pip <span class="o">&&</span> <span class="se">\
</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--upgrade</span> setuptools <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">-U</span> numpy grpcio absl-py py-cpuinfo psutil portpicker grpcio six mock requests gast astor termcolor
<span class="c"># Install TensorFlow</span>
<span class="c">#RUN python3 -m pip install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu</span>
<span class="c"># can browse from https://developer.download.nvidia.com/compute/redist/jp/</span>
<span class="c">#RUN python3 -m pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==$TF_VERSION+nv$NV_VERSION</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--extra-index-url</span> https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu<span class="o">==</span>1.14.0+nv19.7
</code></pre></div></div>
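<p>Once the image builds, a quick smoke test confirms that the TensorFlow wheel imports cleanly. This is just a sketch; the image tag below is an assumption, so substitute whatever tag your own build produces.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Image tag assumed; substitute the tag from your own build
docker run --rm -it l4t:32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-devel \
  python3 -c "import tensorflow as tf; print(tf.__version__)"
</code></pre></div></div>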
<h3 id="minimum-usable">Minimum Usable</h3>
<p>If you aren’t using <code class="language-plaintext highlighter-rouge">TensorRT</code>, you can drop almost <code class="language-plaintext highlighter-rouge">1.7GB</code> from the image, including all of its dependencies.</p>
<p>Note: In this example, we installed <code class="language-plaintext highlighter-rouge">cuDNN</code>, but you can skip it if you don’t need the APIs that leverage it. You will get a warning when running your application that it can’t load <code class="language-plaintext highlighter-rouge">cuDNN</code>’s <code class="language-plaintext highlighter-rouge">.so</code> file, but your app should still run. Skipping it drops the image down to <code class="language-plaintext highlighter-rouge">2.37GB</code>.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">ARG</span><span class="s"> DEPENDENCIES_IMAGE</span>
<span class="k">ARG</span><span class="s"> IMAGE_NAME</span>
<span class="k">ARG</span><span class="s"> TAG</span>
<span class="k">FROM</span><span class="s"> ${DEPENDENCIES_IMAGE} as dependencies</span>
<span class="k">ARG</span><span class="s"> IMAGE_NAME</span>
<span class="k">ARG</span><span class="s"> TAG</span>
<span class="k">FROM</span><span class="s"> ${IMAGE_NAME}:${TAG}-base as tensorflow-base</span>
<span class="c"># CUDA Toolkit for L4T</span>
<span class="c"># TensorFlow: cuda-cublas-10-0 cuda-cudart-10-0 cuda-cufft-10-0</span>
<span class="c"># cuda-curand-10-0 cuda-cusolver-10-0 cuda-cusparse-10-0</span>
<span class="k">ARG</span><span class="s"> CUDA_TOOLKIT_PKG="cuda-repo-l4t-10-0-local-${CUDA_PKG_VERSION}_arm64.deb"</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/${CUDA_TOOLKIT_PKG} ${CUDA_TOOLKIT_PKG}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"0e12b2f53c7cbe4233c2da73f7d8e6b4 </span><span class="k">${</span><span class="nv">CUDA_TOOLKIT_PKG</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">--force-all</span> <span class="nt">-i</span> <span class="k">${</span><span class="nv">CUDA_TOOLKIT_PKG</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="k">${</span><span class="nv">CUDA_TOOLKIT_PKG</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> apt-get update <span class="o">&&</span> <span class="se">\
</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--allow-downgrades</span> <span class="se">\
</span> cuda-cublas-10-0 <span class="se">\
</span> cuda-cudart-10-0 <span class="se">\
</span> cuda-cufft-10-0 <span class="se">\
</span> cuda-curand-10-0 <span class="se">\
</span> cuda-cusolver-10-0 <span class="se">\
</span> cuda-cusparse-10-0 <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">--purge</span> cuda-repo-l4t-10-0-local-10.0.326 <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="c"># NVIDIA CUDA Deep Neural Network library (cuDNN)</span>
<span class="k">ENV</span><span class="s"> CUDNN_VERSION 7.5.0.56</span>
<span class="k">ENV</span><span class="s"> CUDNN_PKG_VERSION=${CUDA_VERSION}-1</span>
<span class="k">LABEL</span><span class="s"> com.nvidia.cudnn.version="${CUDNN_VERSION}"</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/libcudnn7_$CUDNN_VERSION-1+cuda10.0_arm64.deb libcudnn7_$CUDNN_VERSION-1+cuda10.0_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"9f30aa86e505a3b83b127ed7a51309a1 libcudnn7_</span><span class="nv">$CUDNN_VERSION</span><span class="s2">-1+cuda10.0_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> libcudnn7_<span class="nv">$CUDNN_VERSION</span><span class="nt">-1</span>+cuda10.0_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>libcudnn7_<span class="nv">$CUDNN_VERSION</span><span class="nt">-1</span>+cuda10.0_arm64.deb
<span class="c"># Install dependencies for OpenCV</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--no-install-recommends</span> <span class="se">\
</span> libavcodec-extra57 <span class="se">\
</span> libavformat57 <span class="se">\
</span> libavutil55 <span class="se">\
</span> libcairo2 <span class="se">\
</span> libgtk2.0-0 <span class="se">\
</span> libswscale4 <span class="se">\
</span> libtbb2 <span class="se">\
</span> libtbb-dev <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="c">## Additional OpenCV dependencies usually installed by the CUDA Toolkit</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> <span class="se">\
</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span> libgstreamer1.0-0 <span class="se">\
</span> libgstreamer-plugins-base1.0-0 <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="c"># Open CV 3.3.1</span>
<span class="k">ENV</span><span class="s"> OPENCV_VERSION 3.3.1</span>
<span class="k">ENV</span><span class="s"> OPENCV_PKG_VERSION=${OPENCV_VERSION}-2-g31ccdfe11</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/libopencv_${OPENCV_PKG_VERSION}_arm64.deb libopencv_${OPENCV_PKG_VERSION}_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"dd5b571c08a0098141203daec2ea1acc libopencv_</span><span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span><span class="s2">_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> libopencv_<span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span>_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>libopencv_<span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span>_arm64.deb
<span class="c">## Open CV python binding</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"35776ce159afa78a0fe727d4a3c5b6fa libopencv-python_</span><span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span><span class="s2">_arm64.deb"</span> | <span class="nb">md5sum</span> <span class="nt">-c</span> - <span class="o">&&</span> <span class="se">\
</span> dpkg <span class="nt">-i</span> libopencv-python_<span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span>_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>libopencv-python_<span class="k">${</span><span class="nv">OPENCV_PKG_VERSION</span><span class="k">}</span>_arm64.deb
</code></pre></div></div>
<p>The rest follows the same steps as the <code class="language-plaintext highlighter-rouge">devel</code> image to install TensorFlow:</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span> build-essential <span class="se">\
</span> libhdf5-dev <span class="se">\
</span> libhdf5-serial-dev <span class="se">\
</span> python3-dev <span class="se">\
</span> python3-h5py <span class="se">\
</span> python3-pip <span class="se">\
</span> python3-setuptools <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--upgrade</span> pip <span class="o">&&</span> <span class="se">\
</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--upgrade</span> setuptools <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">-U</span> numpy grpcio absl-py py-cpuinfo psutil portpicker grpcio six mock requests gast astor termcolor
<span class="c"># Install TensorFlow</span>
<span class="c">#RUN python3 -m pip install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu</span>
<span class="c"># can browse from https://developer.download.nvidia.com/compute/redist/jp/</span>
<span class="c">#RUN python3 -m pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==$TF_VERSION+nv$NV_VERSION</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--extra-index-url</span> https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu<span class="o">==</span>1.14.0+nv19.7
</code></pre></div></div>
<h2 id="tensorflow-object-detection-libraries">TensorFlow Object Detection Libraries</h2>
<p>You might want to give TensorFlow a spin on your Jetson. We can build off of the base images we’ve created here to layer in the TensorFlow Object Detection APIs. To do this we’ll need to:</p>
<ol>
<li>Package the APIs as wheels to be installed later. (I’d rather not clone the whole repo and run from there)</li>
<li>Install the wheels and their dependencies into our app base.</li>
<li>Install our test app and its dependencies.</li>
</ol>
<p>Adding this sample application will increase your image size by <code class="language-plaintext highlighter-rouge">~1.48GB</code>, and the model will be downloaded when the application runs. You can decrease startup time by bundling the model as its own layer, as sketched below.</p>
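<p>As a sketch of that idea (the model name mirrors the one used in the app below; the exact layering is left to you), you could pre-download and extract the frozen graph on your build machine so a <code class="language-plaintext highlighter-rouge">COPY</code> instruction can bake it into its own layer instead of fetching it at startup:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Hypothetical pre-fetch into the Docker build context
MODEL_NAME=ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03
wget http://download.tensorflow.org/models/object_detection/${MODEL_NAME}.tar.gz
tar -xzf ${MODEL_NAME}.tar.gz ${MODEL_NAME}/frozen_inference_graph.pb
</code></pre></div></div>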
<h3 id="object-detection-wheels">Object Detection Wheels</h3>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> tensorflow-base as objectdetection-builder</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--no-install-recommends</span> <span class="se">\
</span> git <span class="se">\
</span> protobuf-compiler <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="c"># Clone the TensorFlow Models Repository</span>
<span class="c"># the release branches usually don't contain the research folder, so we have to use master.</span>
<span class="k">ARG</span><span class="s"> TF_MODELS_VERSION=master</span>
<span class="k">RUN </span>git clone <span class="nt">--depth</span> 1 https://github.com/tensorflow/models.git <span class="nt">-b</span> <span class="k">${</span><span class="nv">TF_MODELS_VERSION</span><span class="k">}</span>
<span class="k">WORKDIR</span><span class="s"> /models/research</span>
<span class="c"># Compile the Protos</span>
<span class="k">RUN </span>protoc object_detection/protos/<span class="k">*</span>.proto <span class="nt">--python_out</span><span class="o">=</span>.
<span class="c"># Build the Wheels</span>
<span class="k">RUN </span>python3 setup.py build <span class="o">&&</span> <span class="se">\
</span> python3 setup.py bdist_wheel <span class="o">&&</span> <span class="se">\
</span> <span class="o">(</span><span class="nb">cd </span>slim <span class="o">&&</span> python3 setup.py bdist_wheel<span class="o">)</span>
</code></pre></div></div>
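<p>If you want to sanity-check this stage on its own (a sketch; it assumes you pass the same build arguments the Makefile would, and that the base image has no entrypoint), you can target it directly and list the wheels it produced:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Supply the same build arguments the Makefile would; values here are placeholders
docker build \
  --build-arg DEPENDENCIES_IMAGE=${DEPENDENCIES_IMAGE} \
  --build-arg IMAGE_NAME=${IMAGE_NAME} \
  --build-arg TAG=${TAG} \
  --target objectdetection-builder -t objectdetection-builder .
docker run --rm objectdetection-builder ls dist slim/dist
</code></pre></div></div>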
<h3 id="app-base">App Base</h3>
<p>Now we get to layer in the wheels, data, and app dependencies so that the final app stage stays simple.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> tensorflow-base as app-base</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span> libfreetype6-dev <span class="se">\
</span> pkg-config <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="k">COPY</span><span class="s"> --from=objectdetection-builder /models/research/dist/object_detection-0.1-py3-none-any.whl .</span>
<span class="k">COPY</span><span class="s"> --from=objectdetection-builder /models/research/slim/dist/slim-0.1-py3-none-any.whl .</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> object_detection-0.1-py3-none-any.whl <span class="o">&&</span> <span class="se">\
</span> python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> slim-0.1-py3-none-any.whl <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>object_detection-0.1-py3-none-any.whl <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm </span>slim-0.1-py3-none-any.whl
<span class="k">COPY</span><span class="s"> --from=objectdetection-builder /models/research/object_detection/data /data</span>
<span class="c"># App dependencies</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--no-install-recommends</span> <span class="se">\
</span> libavcodec-dev <span class="se">\
</span> libavformat-dev <span class="se">\
</span> libavutil-dev <span class="se">\
</span> libeigen3-dev <span class="se">\
</span> libglew-dev <span class="se">\
</span> libtiff5-dev <span class="se">\
</span> libjpeg-dev <span class="se">\
</span> libpng-dev <span class="se">\
</span> libpostproc-dev <span class="se">\
</span> libswscale-dev <span class="se">\
</span> libtbb-dev <span class="se">\
</span> libgtk2.0-dev <span class="se">\
</span> libxvidcore-dev <span class="se">\
</span> libx264-dev <span class="se">\
</span> zlib1g-dev <span class="se">\
</span> libxml2-dev <span class="se">\
</span> libxslt1-dev <span class="se">\
</span> libcanberra-gtk-module <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
</code></pre></div></div>
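<p>Before moving on, it can be worth a quick check that the wheels installed correctly into this layer. A sketch, assuming you build just the <code class="language-plaintext highlighter-rouge">app-base</code> stage and tag it as shown:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Build only the app-base stage (same build arguments as before) and verify the import
docker build --target app-base -t l4t-app-base .
docker run --rm l4t-app-base python3 -c "import object_detection; print('object_detection OK')"
</code></pre></div></div>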
<h3 id="the-app">The App</h3>
<p>This part should be nice and small (unless we drop something big into <code class="language-plaintext highlighter-rouge">requirements.txt</code>), and it is where most of our app releases will live as our deployments change.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> app-base</span>
<span class="k">RUN </span><span class="nb">mkdir </span>app
<span class="k">WORKDIR</span><span class="s"> /app</span>
<span class="k">COPY</span><span class="s"> requirements.txt ./</span>
<span class="k">RUN </span>python3 <span class="nt">-m</span> pip <span class="nb">install</span> <span class="nt">--no-cache-dir</span> <span class="nt">--user</span> <span class="nt">-r</span> requirements.txt
<span class="k">COPY</span><span class="s"> app.py ./</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"#!/bin/bash"</span> <span class="o">>></span> entrypoint.sh <span class="o">&&</span> <span class="se">\
</span> <span class="nb">echo</span> <span class="s2">"python3 app.py </span><span class="se">\$</span><span class="s2">*"</span> <span class="o">>></span> entrypoint.sh <span class="o">&&</span> <span class="se">\
</span> <span class="nb">chmod</span> +x entrypoint.sh
<span class="k">ENTRYPOINT</span><span class="s"> ["sh", "-c", "./entrypoint.sh $*", "--"]</span>
</code></pre></div></div>
<p>For the application itself, we’re just going to reuse most of the object detection tutorial notebook, hooked up to a USB camera:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Code adapted from:
# https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
# License notice for TensorFlow Models and TensorFlow Object Detection
# -------------------------------
# Copyright 2016 The TensorFlow Authors. All rights reserved.
# Licensed under the Apache License, Version 2.0.
#
# Available at
# https://github.com/tensorflow/models/blob/master/LICENSE
</span>
<span class="kn">from</span> <span class="nn">object_detection.utils</span> <span class="kn">import</span> <span class="n">visualization_utils</span> <span class="k">as</span> <span class="n">vis_util</span>
<span class="kn">from</span> <span class="nn">object_detection.utils</span> <span class="kn">import</span> <span class="n">label_map_util</span>
<span class="kn">from</span> <span class="nn">object_detection.utils</span> <span class="kn">import</span> <span class="n">ops</span> <span class="k">as</span> <span class="n">utils_ops</span>
<span class="kn">import</span> <span class="nn">object_detection</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">six.moves.urllib</span> <span class="k">as</span> <span class="n">urllib</span>
<span class="kn">import</span> <span class="nn">sys</span>
<span class="kn">import</span> <span class="nn">tarfile</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="n">tf</span>
<span class="kn">import</span> <span class="nn">zipfile</span>
<span class="kn">from</span> <span class="nn">distutils.version</span> <span class="kn">import</span> <span class="n">StrictVersion</span>
<span class="kn">from</span> <span class="nn">collections</span> <span class="kn">import</span> <span class="n">defaultdict</span>
<span class="kn">from</span> <span class="nn">io</span> <span class="kn">import</span> <span class="n">StringIO</span>
<span class="kn">from</span> <span class="nn">PIL</span> <span class="kn">import</span> <span class="n">Image</span>
<span class="kn">import</span> <span class="nn">argparse</span>
<span class="kn">import</span> <span class="nn">cv2</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="k">if</span> <span class="n">StrictVersion</span><span class="p">(</span><span class="n">tf</span><span class="p">.</span><span class="n">__version__</span><span class="p">)</span> <span class="o"><</span> <span class="n">StrictVersion</span><span class="p">(</span><span class="s">'1.12.0'</span><span class="p">):</span>
<span class="k">raise</span> <span class="nb">ImportError</span><span class="p">(</span>
<span class="s">'Please upgrade your TensorFlow installation to v1.12.*.'</span><span class="p">)</span>
<span class="c1"># What model to download, pick one or find others :)
#MODEL_NAME = 'ssd_mobilenet_v1_coco_2018_01_28'
</span><span class="n">MODEL_NAME</span> <span class="o">=</span> <span class="s">'ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03'</span>
<span class="c1">#MODEL_NAME = 'ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03'
#MODEL_NAME = 'ssdlite_mobilenet_v2_coco_2018_05_09'
#MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
</span><span class="n">MODEL_FILE</span> <span class="o">=</span> <span class="n">MODEL_NAME</span> <span class="o">+</span> <span class="s">'.tar.gz'</span>
<span class="n">DOWNLOAD_BASE</span> <span class="o">=</span> <span class="s">'http://download.tensorflow.org/models/object_detection/'</span>
<span class="c1"># Path to frozen detection graph. This is the actual model that is used for the object detection.
</span><span class="n">PATH_TO_FROZEN_GRAPH</span> <span class="o">=</span> <span class="n">MODEL_NAME</span> <span class="o">+</span> <span class="s">'/frozen_inference_graph.pb'</span>
<span class="c1"># List of the strings that is used to add correct label for each box.
</span><span class="n">PATH_TO_LABELS</span> <span class="o">=</span> <span class="n">os</span><span class="p">.</span><span class="n">path</span><span class="p">.</span><span class="n">join</span><span class="p">(</span><span class="s">'/data'</span><span class="p">,</span> <span class="s">'mscoco_label_map.pbtxt'</span><span class="p">)</span>
<span class="n">opener</span> <span class="o">=</span> <span class="n">urllib</span><span class="p">.</span><span class="n">request</span><span class="p">.</span><span class="n">URLopener</span><span class="p">()</span>
<span class="n">opener</span><span class="p">.</span><span class="n">retrieve</span><span class="p">(</span><span class="n">DOWNLOAD_BASE</span> <span class="o">+</span> <span class="n">MODEL_FILE</span><span class="p">,</span> <span class="n">MODEL_FILE</span><span class="p">)</span>
<span class="n">tar_file</span> <span class="o">=</span> <span class="n">tarfile</span><span class="p">.</span><span class="nb">open</span><span class="p">(</span><span class="n">MODEL_FILE</span><span class="p">)</span>
<span class="k">for</span> <span class="nb">file</span> <span class="ow">in</span> <span class="n">tar_file</span><span class="p">.</span><span class="n">getmembers</span><span class="p">():</span>
<span class="n">file_name</span> <span class="o">=</span> <span class="n">os</span><span class="p">.</span><span class="n">path</span><span class="p">.</span><span class="n">basename</span><span class="p">(</span><span class="nb">file</span><span class="p">.</span><span class="n">name</span><span class="p">)</span>
<span class="k">if</span> <span class="s">'frozen_inference_graph.pb'</span> <span class="ow">in</span> <span class="n">file_name</span><span class="p">:</span>
<span class="n">tar_file</span><span class="p">.</span><span class="n">extract</span><span class="p">(</span><span class="nb">file</span><span class="p">,</span> <span class="n">os</span><span class="p">.</span><span class="n">getcwd</span><span class="p">())</span>
<span class="n">detection_graph</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">Graph</span><span class="p">()</span>
<span class="k">with</span> <span class="n">detection_graph</span><span class="p">.</span><span class="n">as_default</span><span class="p">():</span>
<span class="n">od_graph_def</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">compat</span><span class="p">.</span><span class="n">v1</span><span class="p">.</span><span class="n">GraphDef</span><span class="p">()</span>
<span class="k">with</span> <span class="n">tf</span><span class="p">.</span><span class="n">io</span><span class="p">.</span><span class="n">gfile</span><span class="p">.</span><span class="n">GFile</span><span class="p">(</span><span class="n">PATH_TO_FROZEN_GRAPH</span><span class="p">,</span> <span class="s">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">fid</span><span class="p">:</span>
<span class="n">serialized_graph</span> <span class="o">=</span> <span class="n">fid</span><span class="p">.</span><span class="n">read</span><span class="p">()</span>
<span class="n">od_graph_def</span><span class="p">.</span><span class="n">ParseFromString</span><span class="p">(</span><span class="n">serialized_graph</span><span class="p">)</span>
<span class="n">tf</span><span class="p">.</span><span class="n">import_graph_def</span><span class="p">(</span><span class="n">od_graph_def</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s">''</span><span class="p">)</span>
<span class="n">category_index</span> <span class="o">=</span> <span class="n">label_map_util</span><span class="p">.</span><span class="n">create_category_index_from_labelmap</span><span class="p">(</span>
<span class="n">PATH_TO_LABELS</span><span class="p">,</span> <span class="n">use_display_name</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">load_image_into_numpy_array</span><span class="p">(</span><span class="n">image</span><span class="p">):</span>
<span class="c1"># (im_width, im_height) = image.size
</span> <span class="n">height</span><span class="p">,</span> <span class="n">width</span><span class="p">,</span> <span class="n">channels</span> <span class="o">=</span> <span class="n">image</span><span class="p">.</span><span class="n">shape</span>
<span class="n">image_np</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">array</span><span class="p">(</span><span class="n">np</span><span class="p">.</span><span class="n">asarray</span><span class="p">(</span><span class="n">image</span><span class="p">)).</span><span class="n">reshape</span><span class="p">(</span>
<span class="p">(</span><span class="n">height</span><span class="p">,</span> <span class="n">width</span><span class="p">,</span> <span class="n">channels</span><span class="p">)).</span><span class="n">astype</span><span class="p">(</span><span class="n">np</span><span class="p">.</span><span class="n">uint8</span><span class="p">)</span>
<span class="k">return</span> <span class="n">image_np</span>
<span class="k">def</span> <span class="nf">run_inferences</span><span class="p">(</span><span class="n">video_capture</span><span class="p">,</span> <span class="n">graph</span><span class="p">):</span>
<span class="k">with</span> <span class="n">graph</span><span class="p">.</span><span class="n">as_default</span><span class="p">():</span>
<span class="k">with</span> <span class="n">tf</span><span class="p">.</span><span class="n">compat</span><span class="p">.</span><span class="n">v1</span><span class="p">.</span><span class="n">Session</span><span class="p">()</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
<span class="c1"># Get handles to input and output tensors
</span> <span class="n">ops</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">compat</span><span class="p">.</span><span class="n">v1</span><span class="p">.</span><span class="n">get_default_graph</span><span class="p">().</span><span class="n">get_operations</span><span class="p">()</span>
<span class="n">all_tensor_names</span> <span class="o">=</span> <span class="p">{</span>
<span class="n">output</span><span class="p">.</span><span class="n">name</span> <span class="k">for</span> <span class="n">op</span> <span class="ow">in</span> <span class="n">ops</span> <span class="k">for</span> <span class="n">output</span> <span class="ow">in</span> <span class="n">op</span><span class="p">.</span><span class="n">outputs</span><span class="p">}</span>
<span class="n">tensor_dict</span> <span class="o">=</span> <span class="p">{}</span>
<span class="k">for</span> <span class="n">key</span> <span class="ow">in</span> <span class="p">[</span>
<span class="s">'num_detections'</span><span class="p">,</span> <span class="s">'detection_boxes'</span><span class="p">,</span> <span class="s">'detection_scores'</span><span class="p">,</span>
<span class="s">'detection_classes'</span><span class="p">,</span> <span class="s">'detection_masks'</span>
<span class="p">]:</span>
<span class="n">tensor_name</span> <span class="o">=</span> <span class="n">key</span> <span class="o">+</span> <span class="s">':0'</span>
<span class="k">if</span> <span class="n">tensor_name</span> <span class="ow">in</span> <span class="n">all_tensor_names</span><span class="p">:</span>
<span class="n">tensor_dict</span><span class="p">[</span><span class="n">key</span><span class="p">]</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">compat</span><span class="p">.</span><span class="n">v1</span><span class="p">.</span><span class="n">get_default_graph</span><span class="p">().</span><span class="n">get_tensor_by_name</span><span class="p">(</span><span class="n">tensor_name</span><span class="p">)</span>
<span class="k">if</span> <span class="s">'detection_masks'</span> <span class="ow">in</span> <span class="n">tensor_dict</span><span class="p">:</span>
<span class="c1"># The following processing is only for single image
</span> <span class="n">detection_boxes</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">squeeze</span><span class="p">(</span>
<span class="n">tensor_dict</span><span class="p">[</span><span class="s">'detection_boxes'</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">])</span>
<span class="n">detection_masks</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">squeeze</span><span class="p">(</span>
<span class="n">tensor_dict</span><span class="p">[</span><span class="s">'detection_masks'</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">])</span>
<span class="c1"># Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
</span> <span class="n">real_num_detection</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">cast</span><span class="p">(</span>
<span class="n">tensor_dict</span><span class="p">[</span><span class="s">'num_detections'</span><span class="p">][</span><span class="mi">0</span><span class="p">],</span> <span class="n">tf</span><span class="p">.</span><span class="n">int32</span><span class="p">)</span>
<span class="n">detection_boxes</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="nb">slice</span><span class="p">(</span><span class="n">detection_boxes</span><span class="p">,</span> <span class="p">[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="n">real_num_detection</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">])</span>
<span class="n">detection_masks</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="nb">slice</span><span class="p">(</span><span class="n">detection_masks</span><span class="p">,</span> <span class="p">[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="n">real_num_detection</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">])</span>
<span class="n">detection_masks_reframed</span> <span class="o">=</span> <span class="n">utils_ops</span><span class="p">.</span><span class="n">reframe_box_masks_to_image_masks</span><span class="p">(</span>
<span class="n">detection_masks</span><span class="p">,</span> <span class="n">detection_boxes</span><span class="p">,</span> <span class="n">image</span><span class="p">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">image</span><span class="p">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">2</span><span class="p">])</span>
<span class="n">detection_masks_reframed</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">cast</span><span class="p">(</span><span class="n">tf</span><span class="p">.</span><span class="n">greater</span><span class="p">(</span><span class="n">detection_masks_reframed</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span> <span class="n">tf</span><span class="p">.</span><span class="n">uint8</span><span class="p">)</span>
<span class="c1"># Follow the convention by adding back the batch dimension
</span> <span class="n">tensor_dict</span><span class="p">[</span><span class="s">'detection_masks'</span><span class="p">]</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">detection_masks_reframed</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">image_tensor</span> <span class="o">=</span> <span class="n">tf</span><span class="p">.</span><span class="n">get_default_graph</span><span class="p">().</span><span class="n">get_tensor_by_name</span><span class="p">(</span><span class="s">'image_tensor:0'</span><span class="p">)</span>
<span class="k">if</span> <span class="n">video_capture</span><span class="p">.</span><span class="n">isOpened</span><span class="p">():</span>
<span class="n">windowName</span> <span class="o">=</span> <span class="s">"Jetson TensorFlow Demo"</span>
<span class="n">width</span> <span class="o">=</span> <span class="mi">1280</span>
<span class="n">height</span> <span class="o">=</span> <span class="mi">720</span>
<span class="n">cv2</span><span class="p">.</span><span class="n">namedWindow</span><span class="p">(</span><span class="n">windowName</span><span class="p">,</span> <span class="n">cv2</span><span class="p">.</span><span class="n">WINDOW_NORMAL</span><span class="p">)</span>
<span class="n">cv2</span><span class="p">.</span><span class="n">resizeWindow</span><span class="p">(</span><span class="n">windowName</span><span class="p">,</span> <span class="n">width</span><span class="p">,</span> <span class="n">height</span><span class="p">)</span>
<span class="n">cv2</span><span class="p">.</span><span class="n">moveWindow</span><span class="p">(</span><span class="n">windowName</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">cv2</span><span class="p">.</span><span class="n">setWindowTitle</span><span class="p">(</span><span class="n">windowName</span><span class="p">,</span> <span class="s">"Jetson TensorFlow Demo"</span><span class="p">)</span>
<span class="n">font</span> <span class="o">=</span> <span class="n">cv2</span><span class="p">.</span><span class="n">FONT_HERSHEY_PLAIN</span>
<span class="n">showFullScreen</span> <span class="o">=</span> <span class="bp">False</span>
<span class="k">while</span> <span class="n">cv2</span><span class="p">.</span><span class="n">getWindowProperty</span><span class="p">(</span><span class="n">windowName</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span> <span class="o">>=</span> <span class="mi">0</span><span class="p">:</span>
<span class="n">ret_val</span><span class="p">,</span> <span class="n">frame</span> <span class="o">=</span> <span class="n">video_capture</span><span class="p">.</span><span class="n">read</span><span class="p">()</span>
<span class="c1"># the array based representation of the image will be used later in order to prepare the
</span> <span class="c1"># result image with boxes and labels on it.
</span> <span class="n">image_np</span> <span class="o">=</span> <span class="n">load_image_into_numpy_array</span><span class="p">(</span><span class="n">frame</span><span class="p">)</span>
<span class="c1"># Expand dimensions since the model expects images to have shape: [1, None, None, 3]
</span> <span class="n">image_np_expanded</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">image_np</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
<span class="c1"># Actual detection.
</span>
<span class="n">output_dict</span> <span class="o">=</span> <span class="n">sess</span><span class="p">.</span><span class="n">run</span><span class="p">(</span><span class="n">tensor_dict</span><span class="p">,</span> <span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">image_tensor</span><span class="p">:</span> <span class="n">image_np_expanded</span><span class="p">})</span>
<span class="c1"># all outputs are float32 numpy arrays, so convert types as appropriate
</span> <span class="n">output_dict</span><span class="p">[</span><span class="s">'num_detections'</span><span class="p">]</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">output_dict</span><span class="p">[</span><span class="s">'num_detections'</span><span class="p">][</span><span class="mi">0</span><span class="p">])</span>
<span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_classes'</span><span class="p">]</span> <span class="o">=</span> <span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_classes'</span><span class="p">][</span><span class="mi">0</span><span class="p">].</span><span class="n">astype</span><span class="p">(</span><span class="n">np</span><span class="p">.</span><span class="n">uint8</span><span class="p">)</span>
<span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_boxes'</span><span class="p">]</span> <span class="o">=</span> <span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_boxes'</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>
<span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_scores'</span><span class="p">]</span> <span class="o">=</span> <span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_scores'</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>
<span class="k">if</span> <span class="s">'detection_masks'</span> <span class="ow">in</span> <span class="n">output_dict</span><span class="p">:</span>
<span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_masks'</span><span class="p">]</span> <span class="o">=</span> <span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_masks'</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>
<span class="c1"># Visualization of the results of a detection.
</span> <span class="n">vis_util</span><span class="p">.</span><span class="n">visualize_boxes_and_labels_on_image_array</span><span class="p">(</span>
<span class="n">image_np</span><span class="p">,</span>
<span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_boxes'</span><span class="p">],</span>
<span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_classes'</span><span class="p">],</span>
<span class="n">output_dict</span><span class="p">[</span><span class="s">'detection_scores'</span><span class="p">],</span>
<span class="n">category_index</span><span class="p">,</span>
<span class="n">instance_masks</span><span class="o">=</span><span class="n">output_dict</span><span class="p">.</span><span class="n">get</span><span class="p">(</span><span class="s">'detection_masks'</span><span class="p">),</span>
<span class="n">use_normalized_coordinates</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span>
<span class="n">line_thickness</span><span class="o">=</span><span class="mi">8</span><span class="p">)</span>
<span class="n">displayBuf</span> <span class="o">=</span> <span class="n">cv2</span><span class="p">.</span><span class="n">resize</span><span class="p">(</span><span class="n">image_np</span><span class="p">,</span> <span class="p">(</span><span class="n">width</span><span class="p">,</span> <span class="n">height</span><span class="p">))</span>
<span class="n">cv2</span><span class="p">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">windowName</span><span class="p">,</span> <span class="n">displayBuf</span><span class="p">)</span>
<span class="n">key</span> <span class="o">=</span> <span class="n">cv2</span><span class="p">.</span><span class="n">waitKey</span><span class="p">(</span><span class="mi">10</span><span class="p">)</span>
<span class="k">if</span> <span class="n">key</span> <span class="o">==</span> <span class="o">-</span><span class="mi">1</span><span class="p">:</span>
<span class="k">continue</span>
<span class="k">elif</span> <span class="n">key</span> <span class="o">==</span> <span class="mi">27</span><span class="p">:</span>
<span class="k">break</span>
<span class="k">elif</span> <span class="n">key</span> <span class="o">==</span> <span class="nb">ord</span><span class="p">(</span><span class="s">'f'</span><span class="p">):</span>
<span class="k">if</span> <span class="n">showFullScreen</span> <span class="o">==</span> <span class="bp">False</span><span class="p">:</span>
<span class="n">cv2</span><span class="p">.</span><span class="n">setWindowProperty</span><span class="p">(</span>
<span class="n">windowName</span><span class="p">,</span> <span class="n">cv2</span><span class="p">.</span><span class="n">WND_PROP_FULLSCREEN</span><span class="p">,</span> <span class="n">cv2</span><span class="p">.</span><span class="n">WINDOW_FULLSCREEN</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">cv2</span><span class="p">.</span><span class="n">setWindowProperty</span><span class="p">(</span>
<span class="n">windowName</span><span class="p">,</span> <span class="n">cv2</span><span class="p">.</span><span class="n">WND_PROP_FULLSCREEN</span><span class="p">,</span> <span class="n">cv2</span><span class="p">.</span><span class="n">WINDOW_NORMAL</span><span class="p">)</span>
<span class="n">showFullScreen</span> <span class="o">=</span> <span class="ow">not</span> <span class="n">showFullScreen</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Failed to open the camera."</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">parse_cli_args</span><span class="p">():</span>
<span class="n">parser</span> <span class="o">=</span> <span class="n">argparse</span><span class="p">.</span><span class="n">ArgumentParser</span><span class="p">()</span>
<span class="n">group</span> <span class="o">=</span> <span class="n">parser</span><span class="p">.</span><span class="n">add_mutually_exclusive_group</span><span class="p">()</span>
<span class="n">group</span><span class="p">.</span><span class="n">add_argument</span><span class="p">(</span><span class="s">"--capture_index"</span><span class="p">,</span> <span class="n">dest</span><span class="o">=</span><span class="s">"capture_index"</span><span class="p">,</span>
<span class="n">help</span><span class="o">=</span><span class="s">"Video device # of USB webcam (/dev/video?) [0]"</span><span class="p">,</span>
<span class="n">default</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="nb">type</span><span class="o">=</span><span class="nb">int</span><span class="p">)</span>
<span class="n">arguments</span> <span class="o">=</span> <span class="n">parser</span><span class="p">.</span><span class="n">parse_args</span><span class="p">()</span>
<span class="k">return</span> <span class="n">arguments</span>
<span class="k">if</span> <span class="n">__name__</span> <span class="o">==</span> <span class="s">'__main__'</span><span class="p">:</span>
<span class="n">arguments</span> <span class="o">=</span> <span class="n">parse_cli_args</span><span class="p">()</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Called with args:"</span><span class="p">)</span>
<span class="k">print</span><span class="p">(</span><span class="n">arguments</span><span class="p">)</span>
<span class="k">print</span><span class="p">(</span><span class="s">"OpenCV version: {}"</span><span class="p">.</span><span class="nb">format</span><span class="p">(</span><span class="n">cv2</span><span class="p">.</span><span class="n">__version__</span><span class="p">))</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Capture Index:"</span><span class="p">,</span> <span class="n">arguments</span><span class="p">.</span><span class="n">capture_index</span><span class="p">)</span>
<span class="n">video_capture</span> <span class="o">=</span> <span class="n">cv2</span><span class="p">.</span><span class="n">VideoCapture</span><span class="p">(</span><span class="n">arguments</span><span class="p">.</span><span class="n">capture_index</span><span class="p">)</span>
<span class="n">run_inferences</span><span class="p">(</span><span class="n">video_capture</span><span class="p">,</span> <span class="n">detection_graph</span><span class="p">)</span>
<span class="n">video_capture</span><span class="p">.</span><span class="n">release</span><span class="p">()</span>
<span class="n">cv2</span><span class="p">.</span><span class="n">destroyAllWindows</span><span class="p">()</span>
</code></pre></div></div>
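<p>The contents of <code class="language-plaintext highlighter-rouge">requirements.txt</code> depend on your app; given the imports above, a plausible minimal set (an assumption, not the repository’s actual file) could be created like this:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Hypothetical requirements.txt; Pillow backs the PIL import and matplotlib backs the
# visualization utilities. Adjust to whatever your app actually needs.
cat &gt; requirements.txt &lt;&lt;'EOF'
pillow
matplotlib
EOF
</code></pre></div></div>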
<h3 id="assembling-the-pieces">Assembling the Pieces</h3>
<p>Assuming you’ve cloned the <a href="https://github.com/idavis/jetson-containers">jetson-containers</a> repository:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make build-32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-min
<span class="c"># or</span>
make build-32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-min-full
<span class="c"># or</span>
make build-32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-devel
</code></pre></div></div>
<p>In a terminal on the device, let’s open up X11 forwarding for Docker containers:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>xhost +local:docker
</code></pre></div></div>
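<p>When you’re done testing, you can revoke that access again:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers$ xhost -local:docker
</code></pre></div></div>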
<p>Now we can run the container image and see boxes and confidence scores rendered to the screen. Press <code class="language-plaintext highlighter-rouge">f</code> for full screen or <code class="language-plaintext highlighter-rouge">esc</code> to exit.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># only using --privileged here for brevity</span>
user@nano-dev:~/<span class="nv">$ </span>docker run <span class="nt">--rm</span> <span class="nt">-it</span> <span class="nt">--privileged</span> <span class="nt">--net</span><span class="o">=</span>host <span class="nt">-e</span> <span class="s2">"DISPLAY"</span> l4t:32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-min
<span class="c"># or</span>
user@nano-dev:~/<span class="nv">$ </span>docker run <span class="nt">--rm</span> <span class="nt">-it</span> <span class="nt">--privileged</span> <span class="nt">--net</span><span class="o">=</span>host <span class="nt">-e</span> <span class="s2">"DISPLAY"</span> l4t:32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-min-full
<span class="c"># or</span>
user@nano-dev:~/<span class="nv">$ </span>docker run <span class="nt">--rm</span> <span class="nt">-it</span> <span class="nt">--privileged</span> <span class="nt">--net</span><span class="o">=</span>host <span class="nt">-e</span> <span class="s2">"DISPLAY"</span> l4t:32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-devel
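# arguments after the image name are forwarded through the entrypoint to app.py;
# for example, to use a different camera index (image tag assumed, as above)
user@nano-dev:~/$ docker run --rm -it --privileged --net=host -e "DISPLAY" l4t:32.2-nano-dev-jetpack-4.2.1-tensorflow-zoo-min --capture_index 1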
</code></pre></div></div>
<h1 id="building-root-filesystems-with-desktop-ui-support">Jetson Containers - Building Root Filesystems With Desktop UI Support</h1>
<h1 id="prologue">Prologue</h1>
<p>This post is part of a series covering the NVIDIA Jetson platform. It may help to take a peek through the other posts beforehand.</p>
<ul>
<li><a href="/2019/07/jetson-containers-introduction">Introduction</a></li>
<li><a href="/2019/07/jetson-containers-samples">Building the Samples</a></li>
<li><a href="/2019/07/maximizing-jetson-nano-storage">Maximizing Jetson Nano Dev Kit Storage</a></li>
<li><a href="/2019/07/pushing-images-to-devices">Pushing Images to Devices</a></li>
<li><a href="/2019/07/building-deepstream-images">Building DeepStream Images</a></li>
<li><a href="/2019/08/building-for-cti-devices">Building for CTI Devices</a></li>
<li><a href="/2019/08/building-custom-root-filesystems">Building Custom Root Filesystems</a></li>
</ul>
<h1 id="introduction">Introduction</h1>
<p>This post builds directly off of <a href="/2019/08/building-custom-root-filesystems">Building Custom Root Filesystems</a>, and it is highly recommended that you review that post first, as it covers background information that isn’t reiterated here.</p>
<p>To this end, we only have a few steps to create our root filesystem:</p>
<ol>
<li><a href="#debootstrap">Debootstrap</a></li>
<li><a href="#configuration">Configuration</a></li>
<li><a href="#flashing-a-evice">Flashing a Device</a></li>
</ol>
<h1 id="debootstrap">Debootstrap</h1>
<p><a href="https://wiki.debian.org/Debootstrap">debootstrap</a> is a system tool which installs a Debian based system into a subdirectory on an already existing Debian-based OS. This allows us to create a base (minimal) distribution from which to grow our <code class="language-plaintext highlighter-rouge">rootfs</code>. Since we are building for a foreign architecture, we also need some supporting utilities.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Install dependencies</span>
<span class="nb">sudo </span>apt <span class="nb">install </span>qemu qemu-user-static binfmt-support debootstrap
<span class="c"># Optional: configure locales so that qemu chroots can access them.</span>
<span class="nb">sudo </span>dpkg-reconfigure locales
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">qemu-debootstrap</code> application was installed by <code class="language-plaintext highlighter-rouge">qemu-user-static</code> and automatically runs <code class="language-plaintext highlighter-rouge">chroot ./rootfs /debootstrap/debootstrap --second-stage</code> when building foreign architecture roots. This step fully configures the packages in the new base distribution.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">mkdir </span>rootfs
<span class="nb">sudo </span>qemu-debootstrap <span class="nt">--arch</span> arm64 bionic ./rootfs
</code></pre></div></div>
<p>With that, we can now set up the <a href="#chroot-setup">chroot environment</a>.</p>
<h1 id="configuration">Configuration</h1>
<h2 id="chroot-setup">Chroot Setup</h2>
<p>We need to set up the binds and copy the <code class="language-plaintext highlighter-rouge">qemu-aarch64-static</code> file into our new root.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd </span>rootfs
<span class="nb">sudo cp</span> /usr/bin/qemu-aarch64-static usr/bin/
<span class="nb">sudo </span>mount <span class="nt">--bind</span> /dev/ dev/
<span class="nb">sudo </span>mount <span class="nt">--bind</span> /dev/pts/ dev/pts/
<span class="nb">sudo </span>mount <span class="nt">--bind</span> /sys/ sys/
<span class="nb">sudo </span>mount <span class="nt">--bind</span> /proc/ proc/
</code></pre></div></div>
<p>Now we can enter the <code class="language-plaintext highlighter-rouge">chroot</code> running <code class="language-plaintext highlighter-rouge">bash</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo chroot</span> <span class="nb">.</span> /bin/bash
</code></pre></div></div>
<h2 id="installing-ubuntu-desktop">Installing Ubuntu Desktop</h2>
<p>To set up the desktop environment we first need to configure our <code class="language-plaintext highlighter-rouge">locale</code>; feel free to pass your own locale to the <code class="language-plaintext highlighter-rouge">locale-gen</code> tool. The <code class="language-plaintext highlighter-rouge">--no-install-recommends</code> flag gives us the minimal default desktop environment, omitting web browsers, productivity tools, games, and many other things that we don’t want. The <code class="language-plaintext highlighter-rouge">oem-config-gtk</code> package is the GTK+ frontend for the NVIDIA post-flash UI configuration and automatically removes most of its dependencies as part of the system setup.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>locale-gen en_US.UTF-8
apt update
apt <span class="nb">install </span>ubuntu-desktop oem-config-gtk <span class="nt">-y</span> <span class="nt">--no-install-recommends</span>
</code></pre></div></div>
<p>That’s it! You can follow <a href="/2019/08/building-custom-root-filesystems/#custom-applications">adding custom applications</a> from the <a href="/2019/08/building-custom-root-filesystems">Building Custom Root Filesystems</a> post if you wish to see how to add Azure IoT Edge, OpenSSH Server, or other applications to your root filesystem.</p>
<h2 id="wrapping-it-up">Wrapping It Up</h2>
<p>Once <code class="language-plaintext highlighter-rouge">rootfs</code> customization is complete, <code class="language-plaintext highlighter-rouge">exit</code> the <code class="language-plaintext highlighter-rouge">chroot</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">exit</span>
</code></pre></div></div>
<p>And now unmount and clean everything up. Leaving these around is a bad idea ;)</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>umount ./dev/pts
<span class="nb">sudo </span>umount ./dev
<span class="nb">sudo </span>umount ./sys
<span class="nb">sudo </span>umount ./proc
<span class="nb">sudo rm </span>usr/bin/qemu-aarch64-static
</code></pre></div></div>
<p>Remove extra files left around from the mounts and installation of packages.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo rm</span> <span class="nt">-rf</span> var/lib/apt/lists/<span class="k">*</span>
<span class="nb">sudo rm</span> <span class="nt">-rf</span> dev/<span class="k">*</span>
<span class="nb">sudo rm</span> <span class="nt">-rf</span> var/log/<span class="k">*</span>
<span class="nb">sudo rm</span> <span class="nt">-rf</span> var/tmp/<span class="k">*</span>
<span class="nb">sudo rm</span> <span class="nt">-rf</span> var/cache/apt/archives/<span class="k">*</span>.deb
<span class="nb">sudo rm</span> <span class="nt">-rf</span> tmp/<span class="k">*</span>
</code></pre></div></div>
<p>Finally, with everything configured, cleaned up, and ready, we can create the archive.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo tar</span> <span class="nt">-jcpf</span> ../ubuntu_bionic_desktop_aarch64.tbz2 <span class="nb">.</span>
</code></pre></div></div>
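<p>If you want to note the archive’s SHA-1 yourself (the build tooling in the next section computes the <code class="language-plaintext highlighter-rouge">ROOT_FS_SHA</code> for you), it’s a one-liner:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Prints the SHA-1 that later shows up as ROOT_FS_SHA
sha1sum ../ubuntu_bionic_desktop_aarch64.tbz2
</code></pre></div></div>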
<h1 id="flashing-a-device">Flashing a Device</h1>
<p>Note: For the UI commands it is assumed that the <a href="https://github.com/idavis/jetson-containers">jetson-containers</a> repository is open in VS Code.</p>
<h2 id="creating-the-filesystem-dependencies-image">Creating the Filesystem Dependencies Image</h2>
<p>Once you’ve archived the <code class="language-plaintext highlighter-rouge">rootfs</code>, set <code class="language-plaintext highlighter-rouge">ROOT_FS_ARCHIVE</code> in your <code class="language-plaintext highlighter-rouge">.env</code> to the location of your archive, for example: <code class="language-plaintext highlighter-rouge">ROOT_FS_ARCHIVE=/home/<user>/dev/archives/ubuntu_bionic_desktop_aarch64.tbz2</code>. Be careful: this folder is used as the build context, so everything in it will be loaded into the build (don’t use <code class="language-plaintext highlighter-rouge">/tmp</code>).</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <rootfs from file></code>, enter the name of the final container image you’d like, such as <code class="language-plaintext highlighter-rouge">ubuntu_bionic_desktop_aarch64</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make from-file-rootfs-ubuntu_bionic_desktop_aarch64
</code></pre></div></div>
<p>Once the build is complete you should see:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">-f</span> <span class="s2">"rootfs-from-file.Dockerfile"</span> <span class="nt">-t</span> <span class="s2">"l4t:ubuntu_bionic_desktop_aarch64-rootfs"</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_FS</span><span class="o">=</span>ubuntu_bionic_desktop_aarch64.tbz2 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_FS_SHA</span><span class="o">=</span>8c0a025618fcedabed62e41c238ba49a0c34cf5e <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">VERSION_ID</span><span class="o">=</span>bionic-20190307 <span class="se">\</span>
<span class="nb">.</span>
<span class="c"># ...</span>
Successfully tagged l4t:ubuntu_bionic_desktop_aarch64-rootfs
</code></pre></div></div>
<p>In addition, there will be a new file named after your build <code class="language-plaintext highlighter-rouge">flash/rootfs/ubuntu_bionic_desktop_aarch64.tbz2.conf</code> which contains the environmental information you need to use it in the <code class="language-plaintext highlighter-rouge">.env</code> during a build:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">ROOT_FS</span><span class="o">=</span>ubuntu_bionic_desktop_aarch64.tbz2
<span class="nv">ROOT_FS_SHA</span><span class="o">=</span>8c0a025618fcedabed62e41c238ba49a0c34cf5e
<span class="nv">FS_DEPENDENCIES_IMAGE</span><span class="o">=</span>l4t:ubuntu_bionic_desktop_aarch64-rootfs
</code></pre></div></div>
<h2 id="configuring-the-build">Configuring the Build</h2>
<p>Open your <code class="language-plaintext highlighter-rouge">.env</code> file and copy the contents of the <code class="language-plaintext highlighter-rouge">.conf</code> file created above. The <code class="language-plaintext highlighter-rouge">FS_DEPENDENCIES_IMAGE</code> overrides the default file system image used when building the flashing container. <code class="language-plaintext highlighter-rouge">ROOT_FS</code> tells the build which file to pull from the image and it will be checked against the <code class="language-plaintext highlighter-rouge">ROOT_FS_SHA</code>.</p>
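<p>As an illustration, after copying those values in, your <code class="language-plaintext highlighter-rouge">.env</code> might contain something like the following (the archive path is the example from earlier):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Location of the archive used to build the rootfs dependencies image
ROOT_FS_ARCHIVE=/home/<user>/dev/archives/ubuntu_bionic_desktop_aarch64.tbz2
# Values copied from flash/rootfs/ubuntu_bionic_desktop_aarch64.tbz2.conf
ROOT_FS=ubuntu_bionic_desktop_aarch64.tbz2
ROOT_FS_SHA=8c0a025618fcedabed62e41c238ba49a0c34cf5e
FS_DEPENDENCIES_IMAGE=l4t:ubuntu_bionic_desktop_aarch64-rootfs
</code></pre></div></div>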
<p>With this set, we are ready to build the flashing container.</p>
<h2 id="building-the-flashing-image">Building the Flashing Image</h2>
<p>Note: We’re going to start off with the nano (nano-dev) but you can also run Xavier/TX2 builds here (just replace the text nano-dev with jax or tx2).</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code> which will drop down a build task list. Select <code class="language-plaintext highlighter-rouge">make <imaging options></code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>, select <code class="language-plaintext highlighter-rouge">32.2-nano-dev-jetpack-4.2.1</code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make image-32.2-nano-dev-jetpack-4.2.1
</code></pre></div></div>
<p>Which will run the build with our new root filesystem in place:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">--squash</span> <span class="nt">-f</span> /home/<user>/jetson-containers/flash/l4t/32.2/default.Dockerfile <span class="nt">-t</span> l4t:32.2-nano-dev-jetpack-4.2.1-image <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">DEPENDENCIES_IMAGE</span><span class="o">=</span>l4t:32.2-nano-dev-jetpack-4.2.1-deps <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">DRIVER_PACK</span><span class="o">=</span>Jetson-210_Linux_R32.2.0_aarch64.tbz2 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">DRIVER_PACK_SHA</span><span class="o">=</span>2d60f126a3ecf55269c486b4b0ca684448f2ca7d <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">FS_DEPENDENCIES_IMAGE</span><span class="o">=</span>l4t:ubuntu-bionic-desktop-aarch64-rootfs <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_FS</span><span class="o">=</span>ubuntu_bionic_desktop_aarch64.tbz2 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_FS_SHA</span><span class="o">=</span>8c0a025618fcedabed62e41c238ba49a0c34cf5e <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">BSP_DEPENDENCIES_IMAGE</span><span class="o">=</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">BSP</span><span class="o">=</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">BSP_SHA</span><span class="o">=</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">TARGET_BOARD</span><span class="o">=</span>jetson-nano-qspi-sd <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_DEVICE</span><span class="o">=</span>mmcblk0p1 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">VERSION_ID</span><span class="o">=</span>bionic-20190307 <span class="se">\</span>
<span class="nb">.</span>
<span class="c">#...</span>
Successfully tagged l4t:32.2-nano-dev-jetpack-4.2.1-image
</code></pre></div></div>
<p>We can see the built image which is nice and small compared to the default:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>docker images
REPOSITORY TAG SIZE
l4t 32.2-nano-dev-jetpack-4.2.1-image 1.76GB
</code></pre></div></div>
<h2 id="flashing-the-device">Flashing the Device</h2>
<p>Set your jumpers for flashing, cycle the power or reboot the device. Ensure that it shows up when you run <code class="language-plaintext highlighter-rouge">lsusb</code> (there will be a device with <code class="language-plaintext highlighter-rouge">Nvidia Corp</code> in the line):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>lsusb
<span class="c">#...</span>
Bus 001 Device 069: ID 0955:7020 NVidia Corp.
<span class="c">#...</span>
</code></pre></div></div>
<p>Now that the device is ready, we can flash it (we’re assuming production module size of <code class="language-plaintext highlighter-rouge">16GB/14GiB</code> and not overriding the rootfs size):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>./flash/flash.sh l4t:32.2-nano-dev-jetpack-4.2.1-image
</code></pre></div></div>
<p>The device should reboot automatically once flashed. Once completed, it will begin the UI configuration process where you’ll create a user account and eventually log into the system.</p>
<h1 id="summary">Summary</h1>
<p>This post showed you how to create a minimal Ubuntu Desktop installation for your Jetson Nano Dev Kit (or other device), resulting in a <code class="language-plaintext highlighter-rouge">1.5GB</code> host OS footprint. This gives us a lot more room on the <code class="language-plaintext highlighter-rouge">eMMC</code> to run our containerized workloads.</p>
<p>To test out the new desktop environment, you can <a href="/2019/07/pushing-images-to-devices">push</a> the <a href="/2019/07/jetson-containers-samples">samples</a> to the device.</p>Jetson Containers - Building Custom Root Filesystems2019-08-07T18:00:00+00:002019-08-07T18:00:00+00:00http://codepyre.com/2019/08/building-custom-root-filesystems<h1 id="prologue">Prologue</h1>
<p>This post is part of a series covering the NVIDIA Jetson platform. It may help to take a peek through the other posts beforehand.</p>
<ul>
<li><a href="/2019/07/jetson-containers-introduction">Introduction</a></li>
<li><a href="/2019/07/jetson-containers-samples">Building the Samples</a></li>
<li><a href="/2019/07/maximizing-jetson-nano-storage">Maximizing Jetson Nano Dev Kit Storage</a></li>
<li><a href="/2019/07/pushing-images-to-devices">Pushing Images to Devices</a></li>
<li><a href="/2019/07/building-deepstream-images">Building DeepStream Images</a></li>
<li><a href="/2019/08/building-for-cti-devices">Building for CTI Devices</a></li>
</ul>
<h1 id="introduction">Introduction</h1>
<p>The sample file system provided by NVIDIA offers a quick path to getting started with the Jetson platform. As you begin to develop for the platform, you should look at building a custom root filesystem for your deployments. While not exhaustive, there are several reasons to do this:</p>
<ul>
<li>The <code class="language-plaintext highlighter-rouge">eMMC</code>s in the modules are small</li>
<li>Production deployments likely don’t need X11 or Wayland</li>
<li>Security</li>
<li>Component compatibility</li>
<li>Small host updates</li>
</ul>
<p>It is key to remember the distinction between the board support packages (BSPs) applied to the root filesystem vs the root filesystem itself. The root filesystem is the OS base on which the BSP is applied. Any applications, invariant configuration, and general OS setup are handled here. Further configuration can be done at a later point applying certificates, credentials, and other device specific configuration. This later configuration can also be done via automation tools such as Ansible, Terraform, Puppet, or PowerShell DSC.</p>
<p>To this end we can review several options for building out these root filesystems:</p>
<ol>
<li><a href="#nvidia-Sample">NVIDIA Sample</a></li>
<li><a href="#debootstrap">debootstrap</a></li>
<li><a href="#ubuntu-releases">Ubuntu Releases</a></li>
</ol>
<h1 id="setting-up-the-base">Setting up the Base</h1>
<p>All of the work here is performed on an <code class="language-plaintext highlighter-rouge">x86_64</code> host machine, building images for the foreign <code class="language-plaintext highlighter-rouge">arm64/aarch64</code> architecture.</p>
<h2 id="nvidia-sample">NVIDIA Sample</h2>
<p>The NVIDIA sample is based on Ubuntu 18.04 LTS and set up to let a user configure the host OS after flashing. This gives a full <a href="https://en.wikipedia.org/wiki/Unity_(user_interface)">Unity</a> graphical shell. This base ends up being around <code class="language-plaintext highlighter-rouge">5.5GB</code> which is quite large when we’re likely using <code class="language-plaintext highlighter-rouge">16GB-32GB</code> <code class="language-plaintext highlighter-rouge">eMMC</code>s.</p>
<p>This rootfs should only be used for experimentation and won’t be covered from here on.</p>
<h2 id="debootstrap">Debootstrap</h2>
<p><a href="https://wiki.debian.org/Debootstrap">debootstrap</a> is a system tool which installs a Debian based system into a subdirectory on an already existing Debian-based OS. This allows us to create a base (minimal) distribution from which to grow our <code class="language-plaintext highlighter-rouge">rootfs</code>. Since we are building for a foreign architecture, we also need some supporting utilities.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Install dependencies</span>
<span class="nb">sudo </span>apt <span class="nb">install </span>qemu qemu-user-static binfmt-support debootstrap
<span class="c"># Optional: configure locales so that qemu chroots can access them.</span>
<span class="nb">sudo </span>dpkg-reconfigure locales
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">qemu-debootstrap</code> application was installed by <code class="language-plaintext highlighter-rouge">qemu-user-static</code> and automatically runs <code class="language-plaintext highlighter-rouge">chroot ./rootfs /debootstrap/debootstrap --second-stage</code> when building foreign architecture roots. This step fully configures the packages in the new base distribution.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">mkdir </span>rootfs
<span class="nb">sudo </span>qemu-debootstrap <span class="nt">--arch</span> arm64 bionic ./rootfs
</code></pre></div></div>
<p>With that, we can now set up the <a href="#chroot-there-it-is">chroot environment</a>.</p>
<h2 id="ubuntu-releases">Ubuntu Releases</h2>
<p>Browse the <a href="http://cdimage.ubuntu.com/ubuntu-base/releases">ubuntu releases</a> and find an <code class="language-plaintext highlighter-rouge">*arm64.tar.gz</code> that works for you. For this example, I’m using one of the <code class="language-plaintext highlighter-rouge">18.04.1</code> base root filesystems. In the releases folder you’ll find the <code class="language-plaintext highlighter-rouge">SHASUMS</code> file which can be used to verify your download.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget http://cdimage.ubuntu.com/ubuntu-base/releases/18.04.2/release/ubuntu-base-18.04.1-base-arm64.tar.gz
echo "26c4029b7b99af5a7031d3da58e7e7c3de65c64a *./ubuntu-base-18.04.1-base-arm64.tar.gz" | sha1sum -c --strict -
# Should say OK
mkdir rootfs
sudo tar -xpvf ubuntu-base-18.04.1-base-arm64.tar.gz -C ./rootfs
</code></pre></div></div>
<p>This method will require a couple of additional setup steps outlined below with <code class="language-plaintext highlighter-rouge">#cdimage-release-only:</code>.</p>
<p>With that, we can now set up the <a href="#chroot-there-it-is">chroot environment</a>.</p>
<h1 id="chroot-there-it-is">Chroot! There It Is</h1>
<h2 id="setup">Setup</h2>
<p>With <code class="language-plaintext highlighter-rouge">debootstrap</code> (or release archive) having populated a minimal os installation in the <code class="language-plaintext highlighter-rouge">./rootfs</code> folder, we can leverage <a href="https://wiki.archlinux.org/index.php/Chroot">chroot</a> to change the apparent root directory allowing us to run <code class="language-plaintext highlighter-rouge">apt-get</code> and other applications in the <code class="language-plaintext highlighter-rouge">./rootfs</code> folder isolated from the rest of the host.</p>
<p>Like the container images, we are still sharing the host kernel, and <code class="language-plaintext highlighter-rouge">binfmt-support</code> is telling us that we should be using <code class="language-plaintext highlighter-rouge">/usr/bin/qemu-aarch64-static</code> to interpret arm64 instructions. So we’ll copy it from the host OS into the <code class="language-plaintext highlighter-rouge">./rootfs</code> folder’s tree. Additionally we need to mount some folders into the new root.</p>
<p>Note: If you have chosen to use the cdimage releases we also have to copy around the <code class="language-plaintext highlighter-rouge">resolv.conf</code> in order to configure DNS resolution while we’re in the <code class="language-plaintext highlighter-rouge">chroot</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd </span>rootfs
<span class="nb">sudo cp</span> /usr/bin/qemu-aarch64-static usr/bin/
<span class="nb">sudo </span>mount <span class="nt">--bind</span> /dev/ dev/
<span class="nb">sudo </span>mount <span class="nt">--bind</span> /sys/ sys/
<span class="nb">sudo </span>mount <span class="nt">--bind</span> /proc/ proc/
<span class="c">#cdimage-release-only:</span>
<span class="nb">sudo cp</span> /etc/resolv.conf etc/resolv.conf.host
<span class="nb">sudo mv </span>etc/resolv.conf etc/resolv.conf.saved
<span class="nb">sudo mv </span>etc/resolv.conf.host etc/resolv.conf
</code></pre></div></div>
<p>Now we can enter the <code class="language-plaintext highlighter-rouge">chroot</code> running <code class="language-plaintext highlighter-rouge">bash</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span><span class="nv">LC_ALL</span><span class="o">=</span>C <span class="nv">LANG</span><span class="o">=</span>C.UTF-8 <span class="nb">chroot</span> <span class="nb">.</span> /bin/bash
</code></pre></div></div>
<h2 id="configuration">Configuration</h2>
<p>The <a href="https://help.ubuntu.com/community/NetworkManager">network-manager</a> package will configure our network connectivity and transitions between networks.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt update
apt <span class="nb">install </span>network-manager <span class="nt">-y</span> <span class="nt">--no-install-recommends</span>
</code></pre></div></div>
<p>If you want to add other feeds you can add them like this <code class="language-plaintext highlighter-rouge">echo "deb http://ports.ubuntu.com/ubuntu-ports $(lsb_release -sc) universe" >> /etc/apt/sources.list</code>. Ideally you’ll create your own private <a href="https://wiki.debian.org/DebianRepository/Setup">debian repository</a>.</p>
<p>If you want scripts to be automatically copied into each new user’s home directory, you can leverage <a href="http://www.linfo.org/etc_skel.html">/etc/skel</a> here. When producing your final application images, you should be running them with a new restricted user, and configuring the scripts in <code class="language-plaintext highlighter-rouge">/etc/skel</code> lets this happen automatically for any created users.</p>
<h3 id="etcskel"><code class="language-plaintext highlighter-rouge">/etc/skel</code></h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Modify .bash_logout, .bashrc, and .profile in /etc/skel</span>
</code></pre></div></div>
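<p>As a purely illustrative example, you might append an alias to the skeleton <code class="language-plaintext highlighter-rouge">.bashrc</code> so that every user created later (e.g. via <code class="language-plaintext highlighter-rouge">useradd -m</code>) picks it up:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Example only: anything placed in /etc/skel is copied into new users' home directories
echo "alias ll='ls -alF'" >> /etc/skel/.bashrc
</code></pre></div></div>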
<p>When using the cdimage release, <code class="language-plaintext highlighter-rouge">sudo</code> is not installed, but we’ll need it to add the user to the <code class="language-plaintext highlighter-rouge">sudo</code> group for elevation of privileges. This can be skipped if you want to ensure the user isn’t allowed to elevate.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#cdimage-release-only:</span>
apt update
apt <span class="nb">install sudo</span> <span class="nt">-y</span>
</code></pre></div></div>
<p>Unlike with the <a href="#nvidia-sample">NVIDIA Sample</a> root filesystem, this step is required. The sample creates the users through a user interface post flashing; here we need to have this set up beforehand.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># give root a login</span>
passwd root
<span class="c"># Creates a user with a home directory (--create-home) and default bash shell</span>
useradd <span class="nt">-m</span> nvuser <span class="nt">-s</span> /bin/bash
<span class="c"># supply a password</span>
passwd nvuser
<span class="c"># add the user to the sudoer group so that they can request elevation</span>
usermod <span class="nt">-aG</span> <span class="nb">sudo </span>nvuser
</code></pre></div></div>
<h2 id="custom-applications">Custom Applications</h2>
<p>At this point we have a usable system and can <a href="#wrapping-it-up">wrap up</a> work. We can also, however, add additional applications/runtimes such as Azure IoT Edge.</p>
<h3 id="azure-iot-edge">Azure IoT Edge</h3>
<p>Install the dependencies:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt update
apt <span class="nb">install </span>ca-certificates gpg curl <span class="nt">-y</span> <span class="nt">--no-install-recommends</span>
<span class="c">#cdimage-release-only:</span>
<span class="c"># needed for modprobe by moby-engine</span>
apt <span class="nb">install </span>kmod
</code></pre></div></div>
<p>Install the apt debian archive:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list <span class="o">></span> ./microsoft-prod.list
<span class="nb">mv</span> ./microsoft-prod.list /etc/apt/sources.list.d/
curl https://packages.microsoft.com/keys/microsoft.asc | gpg <span class="nt">--dearmor</span> <span class="o">></span> microsoft.gpg
<span class="nb">mv</span> ./microsoft.gpg /etc/apt/trusted.gpg.d/
</code></pre></div></div>
<p>Install the engine, cli, and daemon:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt update
apt <span class="nb">install </span>moby-engine moby-cli <span class="nt">-y</span>
<span class="c"># This must be done separately. The IoT Edge daemon needs a container runtime</span>
<span class="c"># installed first, it doesn't care what really, but it is a pre-req that can't </span>
<span class="c"># be described as a deb dependency</span>
apt <span class="nb">install </span>iotedge <span class="nt">-y</span>
</code></pre></div></div>
<p>We’ll postpone configuration of the device until post-flash for now. Automation of credentials and other device specific work requires additional workflows which are beyond the rootfs configuration.</p>
<p>Configure Moby:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#cdimage-release-only:</span>
apt update
<span class="c"># You can install any editor you're comfortable with. The vim.tiny package</span>
<span class="c"># is included when using debootstrap.</span>
apt <span class="nb">install </span>vim.tiny <span class="nt">-y</span>
</code></pre></div></div>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#all</span>
<span class="c"># Let us use docker/moby without sudo</span>
usermod <span class="nt">-aG</span> docker nvuser
<span class="c"># Configure the docker daemon for logging reasonable defaults</span>
<span class="nb">mkdir</span> /etc/docker
vim.tiny /etc/docker/daemon.json
</code></pre></div></div>
<p>Enter your container logging settings. This can also be done on a module by module basis through the deployment manifest <code class="language-plaintext highlighter-rouge">createOptions</code>.</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"log-driver"</span><span class="p">:</span><span class="w"> </span><span class="s2">"json-file"</span><span class="p">,</span><span class="w">
</span><span class="nl">"log-opts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"max-size"</span><span class="p">:</span><span class="w"> </span><span class="s2">"10m"</span><span class="p">,</span><span class="w">
</span><span class="nl">"max-file"</span><span class="p">:</span><span class="w"> </span><span class="s2">"3"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<h3 id="installing-ssh">Installing SSH</h3>
<p>You have a few options here:</p>
<ol>
<li>Install it in the root image here</li>
<li>Install it after the fact</li>
<li>Skip SSH and use a USB to Serial Console to talk to the device (story for another day)</li>
</ol>
<p>If you choose to install it (now or later), the steps are simple:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>apt <span class="nb">install </span>openssh-server <span class="nt">-y</span> <span class="nt">--no-install-recommends</span>
<span class="nv">$ </span>vim.tiny /etc/ssh/sshd_config
<span class="c"># Find</span>
<span class="c">#PasswordAuthentication yes</span>
<span class="c"># uncomment it, save, exit</span>
</code></pre></div></div>
<p>(If installing after the fact: <code class="language-plaintext highlighter-rouge">sudo /etc/init.d/ssh restart</code>)</p>
<h2 id="wrapping-it-up">Wrapping It Up</h2>
<p>Once <code class="language-plaintext highlighter-rouge">rootfs</code> customization is complete, <code class="language-plaintext highlighter-rouge">exit</code> the <code class="language-plaintext highlighter-rouge">chroot</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">exit</span>
</code></pre></div></div>
<p>We now need to restore the <code class="language-plaintext highlighter-rouge">resolv.conf</code> files, remove temporary files, and <code class="language-plaintext highlighter-rouge">unmount</code> the paths needed for <code class="language-plaintext highlighter-rouge">chroot</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>umount ./proc
<span class="nb">sudo </span>umount ./sys
<span class="nb">sudo </span>umount ./dev
<span class="nb">sudo rm </span>usr/bin/qemu-aarch64-static
<span class="c">#cdimage-release-only:</span>
<span class="nb">sudo rm </span>etc/resolv.conf
<span class="nb">sudo mv </span>etc/resolv.conf.saved etc/resolv.conf
</code></pre></div></div>
<p>Remove extra files left around from the mounts and installation of packages.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo rm</span> <span class="nt">-rf</span> var/lib/apt/lists/<span class="k">*</span>
<span class="nb">sudo rm</span> <span class="nt">-rf</span> dev/<span class="k">*</span>
<span class="nb">sudo rm</span> <span class="nt">-rf</span> var/log/<span class="k">*</span>
<span class="nb">sudo rm</span> <span class="nt">-rf</span> var/tmp/<span class="k">*</span>
<span class="nb">sudo rm</span> <span class="nt">-rf</span> var/cache/apt/archives/<span class="k">*</span>.deb
<span class="nb">sudo rm</span> <span class="nt">-rf</span> tmp/<span class="k">*</span>
</code></pre></div></div>
<p>Finally, with everything configured, cleaned up, and ready, we can create the archive.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo tar</span> <span class="nt">-jcpf</span> ../ubuntu_bionic_iot-edge_aarch64.tbz2 <span class="nb">.</span>
</code></pre></div></div>
<h1 id="flashing-a-device">Flashing a Device</h1>
<p>Note: For the UI commands it is assumed that the <a href="https://github.com/idavis/jetson-containers">jetson-containers</a> repository is open in VS Code.</p>
<h2 id="creating-the-fs_dependencies_image">Creating the <code class="language-plaintext highlighter-rouge">FS_DEPENDENCIES_IMAGE</code></h2>
<p>Once you’ve archived the <code class="language-plaintext highlighter-rouge">rootfs</code>, set <code class="language-plaintext highlighter-rouge">ROOT_FS_ARCHIVE</code> in your <code class="language-plaintext highlighter-rouge">.env</code> to the location of your archive, for example: <code class="language-plaintext highlighter-rouge">ROOT_FS_ARCHIVE=/home/<user>/dev/archives/ubuntu_bionic_iot-edge_aarch64.tbz2</code>. Be careful: this folder is used as the build context, so everything in it will be loaded into the build (don’t use <code class="language-plaintext highlighter-rouge">/tmp</code>).</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <rootfs from file></code>, enter the name of the final container image you’d like, such as <code class="language-plaintext highlighter-rouge">ubuntu_bionic_iot-edge_aarch64</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make from-file-rootfs-ubuntu_bionic_iot-edge_aarch64
</code></pre></div></div>
<p>Once the build is complete you should see:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">-f</span> <span class="s2">"rootfs-from-file.Dockerfile"</span> <span class="nt">-t</span> <span class="s2">"l4t:ubuntu_bionic_iot-edge_aarch64-rootfs"</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_FS</span><span class="o">=</span>ubuntu_bionic_iot-edge_aarch64.tbz2 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_FS_SHA</span><span class="o">=</span>8c0a025618fcedabed62e41c238ba49a0c34cf5e <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">VERSION_ID</span><span class="o">=</span>bionic-20190307 <span class="se">\</span>
<span class="nb">.</span>
<span class="c"># ...</span>
Successfully tagged l4t:ubuntu_bionic_iot-edge_aarch64-rootfs
</code></pre></div></div>
<p>In addition, there will be a new file named after your build <code class="language-plaintext highlighter-rouge">flash/rootfs/ubuntu_bionic_iot-edge_aarch64.tbz2.conf</code> which contains the environmental information you need to use it in the <code class="language-plaintext highlighter-rouge">.env</code> during a build:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">ROOT_FS</span><span class="o">=</span>ubuntu_bionic_iot-edge_aarch64.tbz2
<span class="nv">ROOT_FS_SHA</span><span class="o">=</span>8c0a025618fcedabed62e41c238ba49a0c34cf5e
<span class="nv">FS_DEPENDENCIES_IMAGE</span><span class="o">=</span>l4t:ubuntu_bionic_iot-edge_aarch64-rootfs
</code></pre></div></div>
<h2 id="configuring-the-build">Configuring the Build</h2>
<p>Open your <code class="language-plaintext highlighter-rouge">.env</code> file and copy the contents of the <code class="language-plaintext highlighter-rouge">.conf</code> file created above. The <code class="language-plaintext highlighter-rouge">FS_DEPENDENCIES_IMAGE</code> overrides the default file system image used when building the flashing container. <code class="language-plaintext highlighter-rouge">ROOT_FS</code> tells the build which file to pull from the image and it will be checked against the <code class="language-plaintext highlighter-rouge">ROOT_FS_SHA</code>.</p>
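<p>For example, after copying those values in, your <code class="language-plaintext highlighter-rouge">.env</code> might look roughly like this (the archive path is illustrative):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ROOT_FS_ARCHIVE=/home/<user>/dev/archives/ubuntu_bionic_iot-edge_aarch64.tbz2
ROOT_FS=ubuntu_bionic_iot-edge_aarch64.tbz2
ROOT_FS_SHA=8c0a025618fcedabed62e41c238ba49a0c34cf5e
FS_DEPENDENCIES_IMAGE=l4t:ubuntu_bionic_iot-edge_aarch64-rootfs
</code></pre></div></div>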
<p>With this set, we are ready to build the flashing container.</p>
<h2 id="building-the-flashing-image">Building the Flashing Image</h2>
<p>Note: We’re going to start off with the nano (nano-dev) but you can also run Xavier/TX2 builds here (just replace the text nano-dev with jax or tx2).</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code> which will drop down a build task list. Select <code class="language-plaintext highlighter-rouge">make <imaging options></code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>, select <code class="language-plaintext highlighter-rouge">32.2-nano-dev-jetpack-4.2.1</code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make image-32.2-nano-dev-jetpack-4.2.1
</code></pre></div></div>
<p>Which will run the build with our new root filesystem in place:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">--squash</span> <span class="nt">-f</span> /home/<user>/jetson-containers/flash/l4t/32.2/default.Dockerfile <span class="nt">-t</span> l4t:32.2-nano-dev-jetpack-4.2.1-image <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">DEPENDENCIES_IMAGE</span><span class="o">=</span>l4t:32.2-nano-dev-jetpack-4.2.1-deps <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">DRIVER_PACK</span><span class="o">=</span>Jetson-210_Linux_R32.2.0_aarch64.tbz2 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">DRIVER_PACK_SHA</span><span class="o">=</span>2d60f126a3ecf55269c486b4b0ca684448f2ca7d <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">FS_DEPENDENCIES_IMAGE</span><span class="o">=</span>l4t:ubuntu-bionic-iot-edge-aarch64-rootfs <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_FS</span><span class="o">=</span>ubuntu_bionic_iot-edge_aarch64.tbz2 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_FS_SHA</span><span class="o">=</span>8c0a025618fcedabed62e41c238ba49a0c34cf5e <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">BSP_DEPENDENCIES_IMAGE</span><span class="o">=</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">BSP</span><span class="o">=</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">BSP_SHA</span><span class="o">=</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">TARGET_BOARD</span><span class="o">=</span>jetson-nano-qspi-sd <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">ROOT_DEVICE</span><span class="o">=</span>mmcblk0p1 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">VERSION_ID</span><span class="o">=</span>bionic-20190307 <span class="se">\</span>
<span class="nb">.</span>
<span class="c">#...</span>
Successfully tagged l4t:32.2-nano-dev-jetpack-4.2.1-image
</code></pre></div></div>
<p>We can see the built image which is nice and small compared to the default:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>docker images
REPOSITORY TAG SIZE
l4t 32.2-nano-dev-jetpack-4.2.1-image 1.32GB
</code></pre></div></div>
<h2 id="flashing-the-device">Flashing the Device</h2>
<p>Set your jumpers for flashing, cycle the power or reboot the device. Ensure that it shows up when you run <code class="language-plaintext highlighter-rouge">lsusb</code> (there will be a device with <code class="language-plaintext highlighter-rouge">Nvidia Corp</code> in the line):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>lsusb
<span class="c">#...</span>
Bus 001 Device 069: ID 0955:7020 NVidia Corp.
<span class="c">#...</span>
</code></pre></div></div>
<p>Now that the device is ready, we can flash it (we’re assuming production module size of <code class="language-plaintext highlighter-rouge">16GB/14GiB</code> and not overriding the rootfs size):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>./flash/flash.sh l4t:32.2-nano-dev-jetpack-4.2.1-image
</code></pre></div></div>
<p>The device should reboot automatically once flashed. If you remember your passwords from above, you should now be able to log in and use the device.</p>
<h2 id="configuring-iot-edge">Configuring IoT Edge</h2>
<p>SSH into the device (or type this in manually :) )</p>
<p>The NVIDIA BSP overwrites the <code class="language-plaintext highlighter-rouge">/etc/hosts</code> and <code class="language-plaintext highlighter-rouge">/etc/hostname</code> when it is applied (via <code class="language-plaintext highlighter-rouge">config.tbz2</code>). The device’s default name is <code class="language-plaintext highlighter-rouge">tegra-ubuntu</code>. Let’s change this and give our little alien IoT device a name, <code class="language-plaintext highlighter-rouge">nano-nano</code> (or any other name you wish). You’ll eventually set this through further automation when flashing the device.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>hostnamectl set-hostname nano-nano
</code></pre></div></div>
<p>Next, open your <code class="language-plaintext highlighter-rouge">/etc/hosts</code> and replace <code class="language-plaintext highlighter-rouge">tegra-ubuntu</code> with your new host name.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>vim.tiny /etc/hosts
</code></pre></div></div>
<p>Once completed, your <code class="language-plaintext highlighter-rouge">/etc/hosts</code> file should look like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>127.0.0.1 localhost
127.0.1.1 nano-nano
</code></pre></div></div>
<p>Next, we need to update the hostname configured for IoT Edge:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">sudo </span>vim.tiny /etc/iotedge/config.yaml
<span class="c"># set the connection string</span>
<span class="c"># set the hostname to match the device</span>
<span class="nv">$ </span><span class="nb">sudo</span> /etc/init.d/iotedge restart
</code></pre></div></div>
<p>Your device should now be available by name and automatically download its configured deployment after restarting the service.</p>
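<p>As a quick sanity check (module names will depend on your deployment), you can confirm the daemon is healthy and see what it is running:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Check the daemon and list the modules it has started
sudo systemctl status iotedge
sudo iotedge list
</code></pre></div></div>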
<p>You can also <a href="/2019/07/pushing-images-to-devices">push</a> the <a href="/2019/07/jetson-containers-samples">samples</a> to the device to test your new base image.</p>Jetson Containers - Building for CTI Devices2019-08-02T12:00:00+00:002019-08-02T12:00:00+00:00http://codepyre.com/2019/08/building-for-cti-devices<h1 id="introduction">Introduction</h1>
<p>If you haven’t walked through the <a href="/2019/07/jetson-containers-introduction">first post</a> covering an introduction to Jetson containers, I’d recommend looking at it first.</p>
<p>Working with the NVIDIA development kits is a great way to get started building on the Jetson platform. When you move to a real device, it gets much more complicated. This post covers a thin slice of that: building a set of container images which hold our root file system and everything needed to set up our device. Then we’ll use that image to flash the device.</p>
<p>Both UI and Terminal options are listed for each step. For the UI commands it is assumed that the <a href="https://github.com/idavis/jetson-containers">jetson-containers</a> repository is open in VS Code.</p>
<h1 id="background">Background</h1>
<h2 id="carrier-boards">Carrier Boards</h2>
<p>The Jetson devices (Xavier, Nano, TX1, TX2, etc) are embedded system-on-modules (SoMs) requiring carrier boards to provide input/output, peripheral connectors, and power. NVIDIA publishes specs for their development kits (which are carrier board + SoM) and guidance for manufacturers that want to create custom carrier boards. Each manufacturer is allowed to customize board capabilities for specific purposes. As manufacturers design these boards, they need a way to tell the module what they’ve done. This is achieved through board support packages (BSPs).</p>
<p>NVIDIA provides default <a href="https://en.wikipedia.org/wiki/Board_support_package">BSPs</a> for their development kits which contain the drivers, kernel, kernel headers, <a href="https://en.wikipedia.org/wiki/Device_tree">device trees</a>, flashing utilities, bootloader, operating system (OS) configuration files, and scripts.</p>
<h2 id="the-flashing-process">The Flashing Process</h2>
<p>Carrier board manufacturers build on top of NVIDIA’s BSP, adding in their own drivers, <a href="https://en.wikipedia.org/wiki/Device_tree">device trees</a>, and other customizations. They generally follow a pattern (a rough command-level sketch follows the list):</p>
<ol>
  <li>Extract NVIDIA BSP to folder (root is <code class="language-plaintext highlighter-rouge">Linux_for_Tegra</code>)</li>
<li>Layer carrier board BSP on top of NVIDIA BSP</li>
  <li>Extract the desired root file system into the <code class="language-plaintext highlighter-rouge">Linux_for_Tegra/rootfs</code> folder</li>
<li>Run an installation/configuration script
<ul>
      <li>This will call <code class="language-plaintext highlighter-rouge">Linux_for_Tegra/apply_binaries.sh</code> which configures the rootfs folder with the BSP</li>
</ul>
</li>
<li>Flash the device with the configured root file system</li>
</ol>
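<p>As a rough, command-level sketch of the pattern above (the archive names, the vendor install script, and the board name are placeholders and vary by BSP and carrier board):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># 1. Extract the NVIDIA BSP; this creates Linux_for_Tegra/
tar -xpf <nvidia-driver-pack>.tbz2
# 2. Extract the carrier board BSP on top of it
tar -xzf <carrier-board-bsp>.tgz
# 3. Extract the desired root filesystem into Linux_for_Tegra/rootfs
sudo tar -xpf <rootfs>.tbz2 -C Linux_for_Tegra/rootfs
# 4. Run the vendor installation/configuration script, which calls apply_binaries.sh
cd <carrier-board-bsp> && sudo ./install.sh
# 5. Flash the device while it is in USB recovery mode
cd ../Linux_for_Tegra && sudo ./flash.sh <target-board> mmcblk0p1
</code></pre></div></div>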
<p>There are other possibilities such as creating raw or sparse images which can be flashed with tools such as <a href="https://www.balena.io/etcher/">Etcher</a>.</p>
<h1 id="getting-started">Getting Started</h1>
<p>One manufacturer of NVIDIA carrier boards is Connect Tech Inc (CTI). They have a variety of carrier boards for TX1, TX2, TX2i, and Xavier. We can create a simple container image which can be used to flash the device repeatedly, setting up the base of a provisioning process.</p>
<p>To set this up we need to:</p>
<ol>
<li><a href="#bsp-dependency-image">Set up a BSP dependency image</a></li>
<li><a href="#jetpack-dependency-image">Set up the JetPack dependency image</a></li>
<li><a href="#building-the-cti-flashing-image">Build the CTI flashing image</a></li>
<li><a href="#flashing-the-device">Flash the device</a></li>
</ol>
<h3 id="tldr">TLDR</h3>
<p>If you just want the commands to flash an Orbitty device with the v125 BSP:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers/<span class="nv">$ </span>make cti-32.1-tx2-bsp-v125-deps
~/jetson-containers/<span class="nv">$ </span>make 32.1-tx2-jetpack-4.2-deps
~/jetson-containers/<span class="nv">$ </span>make image-cti-32.1-tx2-bsp-v125-orbitty
~/jetson-containers/<span class="nv">$ </span>./flash/flash.sh l4t:cti-32.1-tx2-bsp-v125-orbitty-image
</code></pre></div></div>
<p>Not using an Orbitty? Replace <code class="language-plaintext highlighter-rouge">orbitty</code> with your device and module, and change <code class="language-plaintext highlighter-rouge">125</code> to the appropriate BSP version:</p>
<ul>
<li>Xavier
<ul>
<li>rogue, rogue-imx274-2cam</li>
<li>mimic-base</li>
</ul>
</li>
<li>TX2/TX2i
<ul>
<li>astro-mpcie, astro-mpcie-tx2i, astro-usb3, astro-usb3-tx2i, astro-revG+, astro-revG+-tx2i</li>
<li>elroy-mpcie, elroy-mpcie-tx2i, elroy-usb3, elroy-usb3-tx2i, elroy-revF+, elroy-refF+-tx2i</li>
<li>orbitty, orbitty-tx2i</li>
<li>rosie, rosie-tx2i</li>
      <li>rudi-mpcie, rudi-mpcie-tx2i, rudi-usb3, rudi-usb3-tx2i, rudi, rudi-tx2i</li>
<li>sprocket</li>
<li>spacely-base, spacely-base-tx2i, spacely-imx274-6cam, spacely-imx274-6cam-tx2i, spacely-imx274-3cam, spacely-imx274-3cam-tx2i</li>
      <li>cogswell, cogswell-tx2i</li>
      <li>vpg003-base, vpg003-base-tx2i</li>
</ul>
</li>
</ul>
<h2 id="bsp-dependency-image">BSP Dependency Image</h2>
<p>CTI publishes BSPs that tend to be aligned to a specific module and NVIDIA driver pack version.</p>
<p>To create the BSP dependency image we can take two paths (first is a lot simpler):</p>
<ol>
<li>
<p>Let the <code class="language-plaintext highlighter-rouge">jetson-containers</code> tooling download the BSP and build the dependency image</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <CTI dependencies></code>, select <code class="language-plaintext highlighter-rouge">cti-32.1-tx2-bsp-v125-deps</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> ~/jetson-containers/<span class="nv">$ </span>make cti-32.1-tx2-bsp-v125-deps
</code></pre></div> </div>
<p>This will download the v125 BSP and put it into a container image <code class="language-plaintext highlighter-rouge">l4t:cti-32.1-tx2-bsp-v125-deps</code></p>
</li>
<li>
<p>Bundle it with the JetPack dependencies image</p>
<ul>
<li>Manually download the JetPack binaries</li>
<li>Manually download the BSP and put it in the same folder as the JetPack binaries</li>
<li>Set the <code class="language-plaintext highlighter-rouge">SDKM_DOWNLOADS</code> in the <code class="language-plaintext highlighter-rouge">.env</code> file to point to your JetPack folder</li>
<li>run
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> ~/jetson-containers$ make from-deps-folder-32.1-tx2-jetpack-4.2
</code></pre></div> </div>
<p>This will build <code class="language-plaintext highlighter-rouge">l4t:32.1-tx2-jetpack-4.2-deps</code> with the BSP in the same folder</p>
</li>
<li>Set the <code class="language-plaintext highlighter-rouge">BSP_DEPENDENCIES_IMAGE</code> in the <code class="language-plaintext highlighter-rouge">.env</code> to this newly created image <code class="language-plaintext highlighter-rouge">l4t:32.1-tx2-jetpack-4.2-deps</code>. By default it uses <code class="language-plaintext highlighter-rouge">l4t:cti-32.1-tx2-bsp-v125-deps</code> from the first method above.</li>
</ul>
</li>
</ol>
<h2 id="jetpack-dependency-image">JetPack Dependency Image</h2>
<p>Following the <a href="jetson-containers-introduction#automated">automated</a> example, enter your NVIDIA developer/partner email address into the <code class="language-plaintext highlighter-rouge">.env</code> file using the <code class="language-plaintext highlighter-rouge">NV_USER</code> setting. If using an NVIDIA partner account, also set <code class="language-plaintext highlighter-rouge">NV_LOGIN_TYPE=nvonline</code>, as the default is <code class="language-plaintext highlighter-rouge">NV_LOGIN_TYPE=devzone</code>.</p>
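<p>For example, the relevant <code class="language-plaintext highlighter-rouge">.env</code> entries might look like this (the e-mail address is a placeholder):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NV_USER=you@example.com
# Only needed for partner accounts; the default is devzone
NV_LOGIN_TYPE=nvonline
</code></pre></div></div>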
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code> which will drop down a build task list. Select <code class="language-plaintext highlighter-rouge">make <jetpack dependencies></code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>, select <code class="language-plaintext highlighter-rouge">32.1-tx2-jetpack-4.2</code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make deps-32.1-tx2-jetpack-4.2
</code></pre></div></div>
<p>Once completed you’ll have a JetPack 4.2 dependencies image ready for the next step: <code class="language-plaintext highlighter-rouge">l4t:32.1-tx2-jetpack-4.2-deps</code>.</p>
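<p>If you’d like to confirm the dependencies image exists before moving on, a quick check works:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers$ docker images l4t:32.1-tx2-jetpack-4.2-deps
</code></pre></div></div>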
<h2 id="building-the-cti-flashing-image">Building the CTI flashing image</h2>
<p>Here we’re going to break down the CTI <code class="language-plaintext highlighter-rouge">Dockerfile</code> used to build the flashing image. All of this work is already done in the repository, but this will give details on what is going on underneath.</p>
<h3 id="how-it-works">How it Works</h3>
<p>We’re laying in the root filesystem and BSP, so we import them as named pieces in <a href="https://docs.docker.com/develop/develop-images/multistage-build">multi-stage builds</a>:</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">ARG</span><span class="s"> VERSION_ID</span>
<span class="k">ARG</span><span class="s"> DEPENDENCIES_IMAGE</span>
<span class="k">ARG</span><span class="s"> BSP_DEPENDENCIES_IMAGE</span>
<span class="k">FROM</span><span class="s"> ${DEPENDENCIES_IMAGE} as dependencies</span>
<span class="k">ARG</span><span class="s"> VERSION_ID</span>
<span class="k">ARG</span><span class="s"> BSP_DEPENDENCIES_IMAGE</span>
<span class="k">FROM</span><span class="s"> ${BSP_DEPENDENCIES_IMAGE} as bsp-dependencies</span>
</code></pre></div></div>
<p>Normally we layer in <code class="language-plaintext highlighter-rouge">qemu</code> at this point to allow these images to be built on <code class="language-plaintext highlighter-rouge">x86_64</code> hosts. The flashing images must always be built on an <code class="language-plaintext highlighter-rouge">x86_64</code> host anyway; here we include it so that we can chroot into the root filesystem and run other tools for custom configuration.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">ARG</span><span class="s"> VERSION_ID</span>
<span class="k">FROM</span><span class="s"> ubuntu:${VERSION_ID} as qemu</span>
<span class="c"># install qemu for the support of building containers on host</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="nt">--no-install-recommends</span> qemu-user-static <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
</code></pre></div></div>
<p>The NVIDIA flashing tools depend on perl, python, sudo, and other packages, so we install them first.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># start of the real image base</span>
<span class="k">ARG</span><span class="s"> VERSION_ID</span>
<span class="k">FROM</span><span class="s"> ubuntu:${VERSION_ID}</span>
<span class="k">COPY</span><span class="s"> --from=qemu /usr/bin/qemu-aarch64-static /usr/bin/qemu-aarch64-static</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span> apt-utils <span class="se">\
</span> bzip2 <span class="se">\
</span> curl <span class="se">\
</span> lbzip2 <span class="se">\
</span> libpython-stdlib <span class="se">\
</span> perl <span class="se">\
</span> python <span class="se">\
</span> python-minimal <span class="se">\
</span> python2.7 <span class="se">\
</span> python2.7-minimal <span class="se">\
</span> <span class="nb">sudo</span> <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
</code></pre></div></div>
<p>Now we take the driver pack (the NVIDIA BSP), extract it into the container, and clean up after. Then we extract the root filesystem from the JetPack dependencies image into the BSP’s <code class="language-plaintext highlighter-rouge">rootfs</code> folder. We can substitute our own root filesystem here to fully customize the device, but that will be covered in a different post.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">ARG</span><span class="s"> DRIVER_PACK</span>
<span class="k">ARG</span><span class="s"> DRIVER_PACK_SHA</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/${DRIVER_PACK} ${DRIVER_PACK}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">DRIVER_PACK_SHA</span><span class="k">}</span><span class="s2"> *./</span><span class="k">${</span><span class="nv">DRIVER_PACK</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">sha1sum</span> <span class="nt">-c</span> <span class="nt">--strict</span> - <span class="o">&&</span> <span class="se">\
</span> <span class="nb">tar</span> <span class="nt">-xp</span> <span class="nt">--overwrite</span> <span class="nt">-f</span> ./<span class="k">${</span><span class="nv">DRIVER_PACK</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> /<span class="k">${</span><span class="nv">DRIVER_PACK</span><span class="k">}</span>
<span class="k">ARG</span><span class="s"> ROOT_FS</span>
<span class="k">ARG</span><span class="s"> ROOT_FS_SHA</span>
<span class="k">COPY</span><span class="s"> --from=dependencies /data/${ROOT_FS} ${ROOT_FS}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">ROOT_FS_SHA</span><span class="k">}</span><span class="s2"> *./</span><span class="k">${</span><span class="nv">ROOT_FS</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">sha1sum</span> <span class="nt">-c</span> <span class="nt">--strict</span> - <span class="o">&&</span> <span class="se">\
</span> <span class="nb">cd</span> /Linux_for_Tegra/rootfs <span class="o">&&</span> <span class="se">\
</span> <span class="nb">tar</span> <span class="nt">-xp</span> <span class="nt">--overwrite</span> <span class="nt">-f</span> /<span class="k">${</span><span class="nv">ROOT_FS</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> /<span class="k">${</span><span class="nv">ROOT_FS</span><span class="k">}</span>
<span class="k">WORKDIR</span><span class="s"> /Linux_for_Tegra</span>
</code></pre></div></div>
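<p>The <code class="language-plaintext highlighter-rouge">*_SHA</code> build arguments are plain SHA-1 checksums of the tarballs, matching the <code class="language-plaintext highlighter-rouge">sha1sum -c</code> calls above. If you do substitute your own artifacts, the values can be computed the same way (the file names here are purely illustrative):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sha1sum my-driver-pack.tbz2 my-custom-rootfs.tar.gz
</code></pre></div></div>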
<p>Now we can load in our CTI BSP and let it configure the <code class="language-plaintext highlighter-rouge">rootfs</code> folder. We must use <code class="language-plaintext highlighter-rouge">sudo</code> here despite being <code class="language-plaintext highlighter-rouge">root</code> as the NVIDIA scripts check for and require <code class="language-plaintext highlighter-rouge">sudo</code>.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">ARG</span><span class="s"> BSP</span>
<span class="k">ARG</span><span class="s"> BSP_SHA</span>
<span class="c"># apply_binaries is handled in the install.sh</span>
<span class="k">COPY</span><span class="s"> --from=bsp-dependencies /data/${BSP} ${BSP}</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">BSP_SHA</span><span class="k">}</span><span class="s2"> *./</span><span class="k">${</span><span class="nv">BSP</span><span class="k">}</span><span class="s2">"</span> | <span class="nb">sha1sum</span> <span class="nt">-c</span> <span class="nt">--strict</span> - <span class="o">&&</span> <span class="se">\
</span> <span class="nb">tar</span> <span class="nt">-xzf</span> <span class="k">${</span><span class="nv">BSP</span><span class="k">}</span> <span class="o">&&</span> <span class="se">\
</span> <span class="nb">cd</span> ./CTI-L4T <span class="o">&&</span> <span class="se">\
</span> <span class="nb">sudo</span> ./install.sh
</code></pre></div></div>
<p>At this point the root filesystem is fully configured. We now generate a helper script which sets up the board and <code class="language-plaintext highlighter-rouge">rootfs</code> target device while passing all command line arguments to the underlying <code class="language-plaintext highlighter-rouge">flash.sh</code>.</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">WORKDIR</span><span class="s"> /Linux_for_Tegra</span>
<span class="k">ARG</span><span class="s"> TARGET_BOARD</span>
<span class="k">ARG</span><span class="s"> ROOT_DEVICE</span>
<span class="k">ENV</span><span class="s"> TARGET_BOARD=$TARGET_BOARD</span>
<span class="k">ENV</span><span class="s"> ROOT_DEVICE=$ROOT_DEVICE</span>
<span class="k">RUN </span><span class="nb">echo</span> <span class="s2">"#!/bin/bash"</span> <span class="o">>></span> entrypoint.sh <span class="o">&&</span> <span class="se">\
</span> <span class="nb">echo</span> <span class="s2">"echo </span><span class="se">\"</span><span class="s2">sudo ./flash.sh </span><span class="se">\$</span><span class="s2">* </span><span class="k">${</span><span class="nv">TARGET_BOARD</span><span class="k">}</span><span class="s2"> </span><span class="k">${</span><span class="nv">ROOT_DEVICE</span><span class="k">}</span><span class="se">\"</span><span class="s2">"</span> <span class="o">>></span> entrypoint.sh <span class="o">&&</span> <span class="se">\
</span> <span class="nb">echo</span> <span class="s2">"sudo ./flash.sh </span><span class="se">\$</span><span class="s2">* </span><span class="k">${</span><span class="nv">TARGET_BOARD</span><span class="k">}</span><span class="s2"> </span><span class="k">${</span><span class="nv">ROOT_DEVICE</span><span class="k">}</span><span class="s2">"</span> <span class="o">>></span> entrypoint.sh <span class="o">&&</span> <span class="se">\
</span> <span class="nb">chmod</span> +x entrypoint.sh
<span class="k">ENTRYPOINT</span><span class="s"> ["sh", "-c", "sudo ./entrypoint.sh $*", "--"]</span>
</code></pre></div></div>
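<p>For reference, with an Orbitty carrier targeting the internal eMMC, the generated <code class="language-plaintext highlighter-rouge">entrypoint.sh</code> ends up looking roughly like this (the board and root device values below are illustrative; they come from the <code class="language-plaintext highlighter-rouge">TARGET_BOARD</code> and <code class="language-plaintext highlighter-rouge">ROOT_DEVICE</code> build arguments):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
echo "sudo ./flash.sh $* orbitty mmcblk0p1"
sudo ./flash.sh $* orbitty mmcblk0p1
</code></pre></div></div>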
<h3 id="how-to-use-it">How to Use It</h3>
<p>UI:</p>
<p>Use <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code> which will drop down a build task list. Select <code class="language-plaintext highlighter-rouge">make <CTI imaging options></code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>, select <code class="language-plaintext highlighter-rouge">cti-32.1-tx2-bsp-v125-orbitty-image</code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make image-cti-32.1-tx2-bsp-v125-orbitty
</code></pre></div></div>
<p>Once built, you’ll have the container image <code class="language-plaintext highlighter-rouge">l4t:cti-32.1-tx2-bsp-v125-orbitty-image</code> ready to flash your device.</p>
<h2 id="flashing-the-device">Flashing the Device</h2>
<p>Put the device into recovery mode and run:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers/<span class="nv">$ </span>./flash/flash.sh l4t:cti-32.1-tx2-bsp-v125-orbitty-image
</code></pre></div></div>
<p>The device will reboot and await your configuration.</p>
<p>At this point your device is flashed, drivers installed, and ready to use; however, none of the NVIDIA JetPack libraries are installed. A container runtime (docker) is installed, and that’s all we need.</p>
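<p>A quick sanity check after first boot is to SSH in and confirm the container runtime is present (the user and host below are placeholders):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh user@device 'docker version'
</code></pre></div></div>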
<h1 id="next-steps">Next Steps</h1>
<p>With a flashed device and no JetPack on the host, we can start leveraging the full containerization of the JetPack platform.</p>
<p>For detailed guidance and a walkthrough, refer to <a href="/2019/07/jetson-containers-introduction#building-the-containers">Building the Containers</a>.</p>
<h2 id="driver-pack">Driver Pack</h2>
<p>Next we’ll build the driver pack image: a much smaller root filesystem (~<code class="language-plaintext highlighter-rouge">60MB</code>) with the NVIDIA L4T driver pack applied. This image sets up the libraries needed for everything else to run.</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <driver packs></code>, select <code class="language-plaintext highlighter-rouge">l4t-32.1-tx2</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make l4t-32.1-tx2
</code></pre></div></div>
<p>Once built, you should see <code class="language-plaintext highlighter-rouge">Successfully tagged l4t:32.1-tx2</code></p>
<h2 id="jetpack">JetPack</h2>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <jetpack></code>, select <code class="language-plaintext highlighter-rouge">32.1-tx2-jetpack-4.2</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make 32.1-tx2-jetpack-4.2
</code></pre></div></div>
<p>Once built, you should see <code class="language-plaintext highlighter-rouge">Successfully tagged l4t:32.1-tx2-jetpack-4.2</code></p>
<p>Let’s take a look:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>docker images
REPOSITORY TAG SIZE
l4t 32.1-tx2-jetpack-4.2-devel 5.67GB
l4t 32.1-tx2-jetpack-4.2-runtime 1.21GB
l4t 32.1-tx2-jetpack-4.2-base 493MB
l4t 32.1-tx2 483MB
l4t tx2-jetpack-4.2-deps 3.32GB
arm64v8/ubuntu bionic-20190307 80.4MB
</code></pre></div></div>
<h2 id="getting-the-images-onto-the-device">Getting the Images Onto the Device</h2>
<p>Push these images to your container registry so that they can be used by your CI pipeline and deployments, or see the <a href="/2019/07/pushing-images-to-devices">pushing images to devices</a> post for a shortcut.</p>
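<p>If you go the registry route, it’s the usual tag-and-push; the registry host below is a placeholder:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker tag l4t:32.1-tx2-jetpack-4.2 registry.example.com/l4t:32.1-tx2-jetpack-4.2
docker push registry.example.com/l4t:32.1-tx2-jetpack-4.2
</code></pre></div></div>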
<p>There is nothing CTI-specific about what we run on the device. All of the device-specific work was handled in the <code class="language-plaintext highlighter-rouge">rootfs</code> flashing, leaving us free to build generic containerized workloads on the device.</p>Jetson Containers - Building DeepStream Images2019-07-31T12:00:00+00:002019-07-31T12:00:00+00:00http://codepyre.com/2019/07/building-deepstream-images<h1 id="introduction">Introduction</h1>
<p>If you haven’t walked through the <a href="/2019/07/jetson-containers-introduction">first post</a> covering an introduction to Jetson containers, I’d recommend looking at it first.</p>
<p>Note: We’re going to start off with Xavier (jax) but you can also run Nano/TX2 builds here (just replace the text jax with nano-dev/tx2). Both UI and Terminal options are listed for each step. For the UI commands it is assumed that the <a href="https://github.com/idavis/jetson-containers">jetson-containers</a> repository is open in VS Code.</p>
<h1 id="three-paths">Three Paths</h1>
<h2 id="nvidia-container-runtime">NVIDIA Container Runtime</h2>
<p>Starting in JetPack 4.2.1, NVIDIA has begun releasing the nvidia-docker container runtime for the Jetson platform. With this, they have released the <a href="nvcr.io/nvidia/l4t-base">l4t-base container image</a> as well as a <a href="nvcr.io/nvidia/deepstream-l4t">DeepStream-l4t</a> image.</p>
<p>These container images require the <a href="https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-Container-Runtime-on-Jetson">NVIDIA Container Runtime on Jetson</a>. The runtime mounts platform-specific libraries and device nodes into the DeepStream container from the underlying host, thereby bypassing the entire reason to run your application in a container. This allows them to ship containers that look small, because they are mounting the host file system into the container.</p>
<p>The device nodes would have to be mounted either way, but the container runtime also requires essentially the entire JetPack 4.2.1 SDK to be installed on the host OS. This makes managing deployed devices that much harder, as patches must be applied to the host rather than shipped as a simple image layer update.</p>
<p>Additionally, NVIDIA has essentially forked runc, so any Common Vulnerabilities and Exposures (CVE) fixes will have an additional delay waiting for NVIDIA to apply them and deploy updates (assuming they do patch). The runtime also forces you to use very specific versions of Docker.</p>
<h2 id="quick-path">Quick Path</h2>
<p>Note: These next parts with the DeepStream <code class="language-plaintext highlighter-rouge">.deb</code> package <a href="https://devtalk.nvidia.com/default/topic/1057580/jetson-nano/jetpack-4-2-1-l4t-r32-2-release-for-jetson-nano-jetson-tx1-tx2-and-jetson-agx-xavier/post/5367091/#5367091">should be temporary</a>. NVIDIA just added DeepStream to JetPack 4.2.1 but we can’t automate the download yet for a dependencies image. Once this is done, the DeepStream 4.0 SDK will be available as part of the JetPack 4.2.1 <code class="language-plaintext highlighter-rouge">devel</code> images.</p>
<p>Go to the <a href="https://developer.nvidia.com/deepstream-sdk">DeepStream SDK site</a> or <a href="https://developer.nvidia.com/deepstream-download">DeepStream Downloads</a> page and download the Jetson <code class="language-plaintext highlighter-rouge">.deb</code> file. You can save it to the <code class="language-plaintext highlighter-rouge">jetson-containers</code> root directory or set the <code class="language-plaintext highlighter-rouge">DOCKER_CONTEXT</code> in your <code class="language-plaintext highlighter-rouge">.env</code> file to where you saved the file.</p>
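<p>For example, if the <code class="language-plaintext highlighter-rouge">.deb</code> landed in your downloads folder, the <code class="language-plaintext highlighter-rouge">.env</code> entry might look like this (the path is illustrative):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>DOCKER_CONTEXT=/home/<user>/Downloads
</code></pre></div></div>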
<p>Once you completed <a href="/2019/07/maximizing-jetson-nano-storage#create-dependencies-image">creating the dependencies image</a> and <a href="/2019/07/maximizing-jetson-nano-storage#create-the-jetpack-images">creating the JetPack images</a> for the device you wish to target, we can build DeepStream images quickly. We’re going to leverage the <code class="language-plaintext highlighter-rouge">docker/examples/deepstream/Dockerfile</code> to base our image on the device’s <code class="language-plaintext highlighter-rouge">devel</code> image.</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <deepstream 4.0 devel></code>, select <code class="language-plaintext highlighter-rouge">32.2-jax-jetpack-4.2.1</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make build-32.2-jax-jetpack-4.2.1-deepstream-4.0-devel
</code></pre></div></div>
<p>Which runs:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">--squash</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">IMAGE_NAME</span><span class="o">=</span>l4t <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">TAG</span><span class="o">=</span>32.2-jax-jetpack-4.2.1 <span class="se">\</span>
<span class="nt">-t</span> l4t:32.2-jax-jetpack-4.2.1-deepstream-4.0-devel <span class="se">\</span>
<span class="nt">-f</span> /home/<user>/jetson-containers/docker/examples/deepstream/Dockerfile <span class="se">\</span>
<span class="nb">.</span> <span class="c"># context path</span>
</code></pre></div></div>
<p>Which builds our DeepStream 4.0 image based on the device’s <code class="language-plaintext highlighter-rouge">devel</code> image. This makes the <code class="language-plaintext highlighter-rouge">Dockerfile</code> quite simple:</p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">ARG</span><span class="s"> IMAGE_NAME</span>
<span class="k">ARG</span><span class="s"> TAG</span>
<span class="k">FROM</span><span class="s"> ${IMAGE_NAME}:${TAG}-devel</span>
<span class="k">RUN </span>apt-get update <span class="o">&&</span> <span class="se">\
</span> apt-get <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span> libssl1.0.0 <span class="se">\
</span> libgstreamer1.0-0 <span class="se">\
</span> gstreamer1.0-tools <span class="se">\
</span> gstreamer1.0-plugins-good <span class="se">\
</span> gstreamer1.0-plugins-bad <span class="se">\
</span> gstreamer1.0-plugins-ugly <span class="se">\
</span> gstreamer1.0-libav <span class="se">\
</span> libgstrtspserver-1.0-0 <span class="se">\
</span> libjansson4 <span class="se">\
</span> libjson-glib-1.0-0 <span class="se">\
</span> <span class="o">&&</span> <span class="se">\
</span> apt-get clean <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/lib/apt/lists/<span class="k">*</span>
<span class="k">COPY</span><span class="s"> ./deepstream-4.0_4.0-1_arm64.deb /deepstream-4.0_4.0-1_arm64.deb</span>
<span class="k">RUN </span>dpkg <span class="nt">-i</span> /deepstream-4.0_4.0-1_arm64.deb <span class="o">&&</span> <span class="se">\
</span> <span class="nb">rm</span> /deepstream-4.0_4.0-1_arm64.deb
<span class="k">RUN </span><span class="nb">export </span><span class="nv">LD_LIBRARY_PATH</span><span class="o">=</span>/usr/lib/aarch64-linux-gnu/tegra:<span class="nv">$LD_LIBRARY_PATH</span>
<span class="k">RUN </span>ldconfig
<span class="k">WORKDIR</span><span class="s"> /opt/nvidia/deepstream/deepstream-4.0/samples</span>
</code></pre></div></div>
<p>Note: The <code class="language-plaintext highlighter-rouge">libjson-glib-1.0-0</code> dependency isn’t listed in the docs, but is required. They assume it was installed via other means.</p>
<p>Once the build is complete we can view the images we’ve built running <code class="language-plaintext highlighter-rouge">docker images</code> (output truncated):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>docker images | <span class="nb">grep </span>32.2-jax-jetpack-4.2.1-
l4t 32.2-jax-jetpack-4.2.1-deepstream-4.0-devel 6.28GB
l4t 32.2-jax-jetpack-4.2.1-devel 5.79GB
l4t 32.2-jax-jetpack-4.2.1-runtime 1.23GB
l4t 32.2-jax-jetpack-4.2.1-base 475MB
l4t 32.2-jax-jetpack-4.2.1-deps 3.48GB
</code></pre></div></div>
<p>We now have a <code class="language-plaintext highlighter-rouge">6.28GB</code> development image which we can use to <a href="#running-the-samples">run the samples</a>.</p>
<h2 id="better-path">Better Path</h2>
<p>In the <a href="/2019/07/jetson-containers-introduction">first post</a> covering an introduction to Jetson containers, we built out the <code class="language-plaintext highlighter-rouge">base</code>, <code class="language-plaintext highlighter-rouge">runtime</code>, and <code class="language-plaintext highlighter-rouge">devel</code> images. In the <a href="#quick-path">Quick Path</a> above and the <a href="/2019/07/jetson-containers/samples">samples</a> post we leveraged the <code class="language-plaintext highlighter-rouge">devel</code> image to quickly build/install applications/SDKs. We can start to do better by leveraging the <code class="language-plaintext highlighter-rouge">base</code> image instead.</p>
<p>As the <code class="language-plaintext highlighter-rouge">runtime</code> and <code class="language-plaintext highlighter-rouge">devel</code> images are built on top of <code class="language-plaintext highlighter-rouge">base</code>, they give us a recipe for building out any custom image we wish to build. Looking at the <code class="language-plaintext highlighter-rouge">jetson-containers/docker/examples/deepstream/</code> folder we can see base images for each device:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>32.2-jax-jetpack-4.2.1.Dockerfile
32.2-nano-dev-jetpack-4.2.1.Dockerfile
32.2-nano-jetpack-4.2.1.Dockerfile
32.2-tx1-jetpack-4.2.1.Dockerfile
32.2-tx2-4gb-jetpack-4.2.1.Dockerfile
32.2-tx2i-jetpack-4.2.1.Dockerfile
32.2-tx2-jetpack-4.2.1.Dockerfile
</code></pre></div></div>
<p>Most of these files are copied and pasted from the corresponding <code class="language-plaintext highlighter-rouge">devel</code> containers. I’m going to describe the differences in detail, focusing on the <code class="language-plaintext highlighter-rouge">32.2-jax-jetpack-4.2.1.Dockerfile</code>.</p>
<p>We need to keep the dependencies image, but we’re switching to the <code class="language-plaintext highlighter-rouge">base</code> image. The <code class="language-plaintext highlighter-rouge">${TAG}</code> has been made generic so that this header is the same across all of the DeepStream <code class="language-plaintext highlighter-rouge">Dockerfile</code>s:</p>
<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">@@ -1,17 +1,12 @@</span>
ARG DEPENDENCIES_IMAGE
ARG IMAGE_NAME
<span class="gi">+ARG TAG
</span> FROM ${DEPENDENCIES_IMAGE} as dependencies
ARG IMAGE_NAME
<span class="gi">+ARG TAG
+FROM ${IMAGE_NAME}:${TAG}-base
</span><span class="gd">-FROM ${IMAGE_NAME}:32.2-jax-jetpack-4.2.1-runtime
</span></code></pre></div></div>
<p>We don’t need the full <code class="language-plaintext highlighter-rouge">cuda-toolkit-10-0</code> installed in <code class="language-plaintext highlighter-rouge">devel</code>. TensorRT requires <code class="language-plaintext highlighter-rouge">cuda-cublas-dev-10-0</code> and <code class="language-plaintext highlighter-rouge">cuda-cudart-dev-10-0</code>. DeepStream requires <code class="language-plaintext highlighter-rouge">cuda-npp-dev-10-0</code>.</p>
<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">#</span> CUDA Toolkit for L4T
ARG CUDA_TOOLKIT_PKG="cuda-repo-l4t-10-0-local-${CUDA_PKG_VERSION}_arm64.deb"
COPY --from=dependencies /data/${CUDA_TOOLKIT_PKG} ${CUDA_TOOLKIT_PKG}
dpkg --force-all -i ${CUDA_TOOLKIT_PKG} && \
rm ${CUDA_TOOLKIT_PKG} && \
apt-get update && \
<span class="gi">+ apt-get install -y --allow-downgrades cuda-cublas-dev-10-0 cuda-cudart-dev-10-0 cuda-npp-dev-10-0 && \
</span><span class="gd">- apt-get install -y --allow-downgrades cuda-toolkit-10-0 libgomp1 libfreeimage-dev libopenmpi-dev openmpi-bin && \
</span> dpkg --purge cuda-repo-l4t-10-0-local-10.0.326 \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
</code></pre></div></div>
<p>We don’t need the cuDNN docs, but we still need the <code class="language-plaintext highlighter-rouge">dev</code> package:</p>
<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gd">-COPY --from=dependencies /data/libcudnn7-doc_$CUDNN_VERSION-1+cuda10.0_arm64.deb libcudnn7-doc_$CUDNN_VERSION-1+cuda10.0_arm64.deb
-RUN echo "f9e43d15ff69d65a85d2aade71a43870 libcudnn7-doc_$CUDNN_VERSION-1+cuda10.0_arm64.deb" | md5sum -c - && \
- dpkg -i libcudnn7-doc_$CUDNN_VERSION-1+cuda10.0_arm64.deb && \
- rm libcudnn7-doc_$CUDNN_VERSION-1+cuda10.0_arm64.deb
-
</span></code></pre></div></div>
<p>From VisionWorks we’re going to remove the <code class="language-plaintext highlighter-rouge">dev</code> and <code class="language-plaintext highlighter-rouge">samples</code> packages:</p>
<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code> # NVIDIA VisionWorks Toolkit
COPY --from=dependencies /data/libvisionworks-repo_1.6.0.500n_arm64.deb libvisionworks-repo_1.6.0.500n_arm64.deb
RUN echo "e70d49ff115bc5782a3d07b572b5e3c0 libvisionworks-repo_1.6.0.500n_arm64.deb" | md5sum -c - && \
dpkg -i libvisionworks-repo_1.6.0.500n_arm64.deb && \
apt-key add /var/visionworks-repo/GPGKEY && \
apt-get update && \
<span class="gi">+ apt-get install -y --allow-unauthenticated libvisionworks && \
</span><span class="gd">- apt-get install -y --allow-unauthenticated libvisionworks libvisionworks-dev libvisionworks-samples && \
</span> dpkg --purge libvisionworks-repo && \
rm libvisionworks-repo_1.6.0.500n_arm64.deb && \
apt-get clean && \
<span class="p">@@ -43,7 +61,7 @@</span> RUN echo "647b0ae86a00745fc6d211545a9fcefe libvisionworks-sfm-repo_0.90.4_arm64.
dpkg -i libvisionworks-sfm-repo_0.90.4_arm64.deb && \
apt-key add /var/visionworks-sfm-repo/GPGKEY && \
apt-get update && \
<span class="gi">+ apt-get install -y --allow-unauthenticated libvisionworks-sfm && \
</span><span class="gd">- apt-get install -y --allow-unauthenticated libvisionworks-sfm libvisionworks-sfm-dev && \
</span> dpkg --purge libvisionworks-sfm-repo && \
rm libvisionworks-sfm-repo_0.90.4_arm64.deb && \
apt-get clean && \
<span class="p">@@ -55,30 +73,12 @@</span> RUN echo "7630f0309c883cc6d8a1ab5a712938a5 libvisionworks-tracking-repo_0.88.2_a
dpkg -i libvisionworks-tracking-repo_0.88.2_arm64.deb && \
apt-key add /var/visionworks-tracking-repo/GPGKEY && \
apt-get update && \
<span class="gi">+ apt-get install -y --allow-unauthenticated libvisionworks-tracking && \
</span><span class="gd">- apt-get install -y --allow-unauthenticated libvisionworks-tracking libvisionworks-tracking-dev && \
</span> dpkg --purge libvisionworks-tracking-repo && \
rm libvisionworks-tracking-repo_0.88.2_arm64.deb && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
</code></pre></div></div>
<p>And now remove all of the Python and TensorRT Python support:</p>
<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gd">-RUN apt-get update && apt-get install -y \
- python-dev \
- python-numpy \
- python-pip \
- python-py \
- python-pytest \
- && \
- python -m pip install -U pip && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
-
-# Python2 support for TensorRT
-COPY --from=dependencies /data/python-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb python-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb
-RUN echo "05f07c96421bb1bfc828c1dd3bcf5fad python-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb" | md5sum -c - && \
- dpkg -i python-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb && \
- rm python-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb
-
-COPY --from=dependencies /data/python-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb python-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb
-RUN echo "35febdc63ec98a92ce1695bb10a2b5e8 python-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb" | md5sum -c - && \
- dpkg -i python-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb && \
- rm python-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb
-
-RUN apt-get update && apt-get install -y \
- python3-dev \
- python3-numpy \
- python3-pip \
- python3-py \
- python3-pytest \
- && \
- python3 -m pip install -U pip && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
-
-# Python3 support for TensorRT
-COPY --from=dependencies /data/python3-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb python3-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb
-RUN echo "88104606e76544cac8d79b4288372f0e python3-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb" | md5sum -c - && \
- dpkg -i python3-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb && \
- rm python3-libnvinfer_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb
-
-COPY --from=dependencies /data/python3-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb python3-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb
-RUN echo "1b703b6ab7a477b24ac9e90e64945799 python3-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb" | md5sum -c - && \
- dpkg -i python3-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb && \
- rm python3-libnvinfer-dev_${LIBINFER_PKG_VERSION}+cuda10.0_arm64.deb
-
</span></code></pre></div></div>
<p>OpenCV has dependencies that were previously installed by <code class="language-plaintext highlighter-rouge">cuda-toolkit-10-0</code>, which we now have to install manually; we also remove the OpenCV Python bindings, dev, and samples packages:</p>
<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gi">+## Additional OpenCV dependencies usually installed by the CUDA Toolkit
+
+RUN apt-get update && \
+ apt-get install -y \
+ libgstreamer1.0-0 \
+ libgstreamer-plugins-base1.0-0 \
+ && \
+ apt-get clean && \
+ rm -rf /var/lib/apt/lists/*
+
</span><span class="gd">-## Open CV python binding
-COPY --from=dependencies /data/libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb
-RUN echo "35776ce159afa78a0fe727d4a3c5b6fa libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb" | md5sum -c - && \
- dpkg -i libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb && \
- rm libopencv-python_${OPENCV_PKG_VERSION}_arm64.deb
-
-# Open CV dev
-COPY --from=dependencies /data/libopencv-dev_${OPENCV_PKG_VERSION}_arm64.deb libopencv-dev_${OPENCV_PKG_VERSION}_arm64.deb
-RUN echo "d29571f888a59dd290da2650dc202623 libopencv-dev_${OPENCV_PKG_VERSION}_arm64.deb" | md5sum -c - && \
- dpkg -i libopencv-dev_${OPENCV_PKG_VERSION}_arm64.deb && \
- rm libopencv-dev_${OPENCV_PKG_VERSION}_arm64.deb
-
-# Open CV samples
-COPY --from=dependencies /data/libopencv-samples_${OPENCV_PKG_VERSION}_arm64.deb libopencv-samples_${OPENCV_PKG_VERSION}_arm64.deb
-RUN echo "4f28a7792425b5e1470d5aa73c2a470d libopencv-samples_${OPENCV_PKG_VERSION}_arm64.deb" | md5sum -c - && \
- dpkg -i libopencv-samples_${OPENCV_PKG_VERSION}_arm64.deb && \
- rm libopencv-samples_${OPENCV_PKG_VERSION}_arm64.deb
</span></code></pre></div></div>
<p>At this point we’ve removed everything that we don’t need. Unfortunately we had to keep some samples and dev packages. The NVIDIA <code class="language-plaintext highlighter-rouge">.deb</code> packages require these dev packages instead of just the runtime packages (for reasons I don’t understand). It would be really nice if we had a clean separation of runtime vs dev vs samples. We can now install the DeepStream SDK:</p>
<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gi">+# DeepStream Dependencies
+RUN apt-get update && \
+ apt-get install -y \
+ libssl1.0.0 \
+ libgstreamer1.0-0 \
+ gstreamer1.0-tools \
+ gstreamer1.0-plugins-good \
+ gstreamer1.0-plugins-bad \
+ gstreamer1.0-plugins-ugly \
+ gstreamer1.0-libav \
+ libgstrtspserver-1.0-0 \
+ libjansson4 \
+ libjson-glib-1.0-0 \
+ && \
+ apt-get clean && \
+ rm -rf /var/lib/apt/lists/*
+
+# Additional DeepStream dependencies usually installed by the CUDA Toolkit
+RUN apt-get update && \
+ apt-get install -y \
+ libgstreamer1.0-dev \
+ libgstreamer-plugins-base1.0-dev \
+ && \
+ apt-get clean && \
+ rm -rf /var/lib/apt/lists/*
+
+# DeepStream
+COPY ./deepstream-4.0_4.0-1_arm64.deb /deepstream-4.0_4.0-1_arm64.deb
+RUN dpkg -i /deepstream-4.0_4.0-1_arm64.deb && \
+ rm /deepstream-4.0_4.0-1_arm64.deb
+RUN export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/tegra:$LD_LIBRARY_PATH
+RUN ldconfig
+WORKDIR /opt/nvidia/deepstream/deepstream-4.0/samples
</span></code></pre></div></div>
<p>DeepStream adds ~<code class="language-plaintext highlighter-rouge">270MB</code> to the image, of which ~<code class="language-plaintext highlighter-rouge">214MB</code> is the compiled samples. The container image is still ~<code class="language-plaintext highlighter-rouge">3.79GB</code>; it remains a development container which we can trim down further at a later time. Now that we’ve covered what changed, we can build the image:</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <deepstream 4.0 release></code>, select <code class="language-plaintext highlighter-rouge">32.2-jax-jetpack-4.2.1</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make build-32.2-jax-jetpack-4.2.1-deepstream-4.0-release
</code></pre></div></div>
<p>Which runs:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">--squash</span> <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">IMAGE_NAME</span><span class="o">=</span>l4t <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">TAG</span><span class="o">=</span>32.2-jax-jetpack-4.2.1 <span class="se">\</span>
<span class="nt">--build-arg</span> <span class="nv">DEPENDENCIES_IMAGE</span><span class="o">=</span>l4t:32.2-jax-jetpack-4.2.1-deps <span class="se">\</span>
<span class="nt">-t</span> l4t:32.2-jax-jetpack-4.2.1-deepstream-4.0-release <span class="se">\</span>
<span class="nt">-f</span> /home/<user>/dev/jetson-containers/docker/examples/deepstream/32.2-jax-jetpack-4.2.1.Dockerfile <span class="se">\</span>
<span class="nb">.</span> <span class="c"># context path</span>
</code></pre></div></div>
<p>Once the build is complete we can view the images we’ve built running <code class="language-plaintext highlighter-rouge">docker images</code> (output truncated):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>docker images | <span class="nb">grep </span>32.2-jax-jetpack-4.2.1-
l4t 32.2-jax-jetpack-4.2.1-deepstream-4.0-release 3.79GB
l4t 32.2-jax-jetpack-4.2.1-deepstream-4.0-devel 6.28GB
l4t 32.2-jax-jetpack-4.2.1-devel 5.79GB
l4t 32.2-jax-jetpack-4.2.1-runtime 1.23GB
l4t 32.2-jax-jetpack-4.2.1-base 475MB
l4t 32.2-jax-jetpack-4.2.1-deps 3.48GB
</code></pre></div></div>
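<p>Earlier we noted the release image can still be trimmed; <code class="language-plaintext highlighter-rouge">docker history</code> gives a per-layer breakdown of where the remaining space goes:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker history l4t:32.2-jax-jetpack-4.2.1-deepstream-4.0-release
</code></pre></div></div>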
<h1 id="running-the-samples">Running the Samples</h1>
<p>Copy the image(s) to the device (covered in <a href="/2019/07/pushing-images-to-devices">pushing images to devices</a>) or push through a container registry.</p>
<p>Assuming you have an X environment installed on the device, open a terminal and run <code class="language-plaintext highlighter-rouge">xhost +local:docker</code>. This will let us leverage X11 forwarding from docker on the local machine.</p>
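<p>That is, on the device (the second command revokes the access again when you’re done):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>xhost +local:docker
# ... run the containers ...
xhost -local:docker
</code></pre></div></div>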
<p>Then either through ssh or in a terminal on the device:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="se">\</span>
<span class="nt">--rm</span> <span class="se">\</span>
<span class="nt">-it</span> <span class="se">\</span>
<span class="nt">-e</span> <span class="s2">"DISPLAY"</span> <span class="se">\</span>
<span class="nt">--net</span><span class="o">=</span>host <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-ctrl <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-ctrl-gpu <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-prof-gpu <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvmap <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-gpu <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-as-gpu <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-vic <span class="se">\</span>
l4t:32.2-jax-jetpack-4.2.1-deepstream-4.0-<release/devel>
</code></pre></div></div>
<p>Try running one of the samples:</p>
<ul>
<li>Xavier: <code class="language-plaintext highlighter-rouge">deepstream-app -c configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt</code></li>
<li>TX2: <code class="language-plaintext highlighter-rouge">deepstream-app -c configs/deepstream-app/source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt</code></li>
<li>TX1: <code class="language-plaintext highlighter-rouge">deepstream-app -c configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt</code></li>
<li>Nano: <code class="language-plaintext highlighter-rouge">deepstream-app -c configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt</code></li>
</ul>
<p>Enjoy the show.</p>
<p>(Fun Note: This container runs on JetPack 4.2 (32.1) based hosts as the images are self-contained, so if you’re waiting for a BSP to update to 32.2 and JetPack 4.2.1, you can still have some fun)</p>Jetson Containers - Maximizing Jetson Nano Dev Kit Storage2019-07-21T06:00:00+00:002019-07-21T06:00:00+00:00http://codepyre.com/2019/07/maximizing-jetson-nano-storage<h1 id="introduction">Introduction</h1>
<p>If you haven’t walked through the <a href="/2019/07/jetson-containers-introduction">first post</a> covering an introduction to Jetson containers, I’d recommend looking at it first. The Nano developer kit is a little harder to work with.</p>
<p>Given the lack of power in the Nano, I recommend building all of the dependencies for the Nano on the <code class="language-plaintext highlighter-rouge">x86_64</code> host and then copying the image to the device (covered in <a href="/2019/07/pushing-images-to-devices">pushing images to devices</a> or through a container registry).</p>
<p>It may seem odd that the device is called <code class="language-plaintext highlighter-rouge">nano-dev</code> here, but this is to differentiate between the dev kit module and the production module, which are different devices with unique device IDs.</p>
<h2 id="create-dependencies-image">Create Dependencies Image</h2>
<p>Enter your NVIDIA developer/partner email address into the <code class="language-plaintext highlighter-rouge">.env</code> file using the <code class="language-plaintext highlighter-rouge">NV_USER</code> setting.</p>
<p>Note: This automation requires that you have the latest NVIDIA SDK Manager installed on your system.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">NV_USER</span><span class="o">=</span>your@email.com
</code></pre></div></div>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code> which will drop down a build task list. Select <code class="language-plaintext highlighter-rouge">make <jetpack dependencies></code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>, select <code class="language-plaintext highlighter-rouge">32.2-nano-dev-jetpack-4.2.1-deps</code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make deps-32.2-nano-dev-jetpack-4.2.1
</code></pre></div></div>
<p>Enter your password and wait for the image to be created. For more details, see the <a href="/2019/07/jetson-containers-introduction">first post</a>.</p>
<h2 id="create-the-jetpack-images">Create the JetPack Images</h2>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code> which will drop down a build task list. Select <code class="language-plaintext highlighter-rouge">make <jetpack></code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>, select <code class="language-plaintext highlighter-rouge">32.2-nano-dev-jetpack-4.2.1</code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make make 32.2-nano-dev-jetpack-4.2.1
</code></pre></div></div>
<p>This will build the 32.2 driver pack, <code class="language-plaintext highlighter-rouge">base</code>, <code class="language-plaintext highlighter-rouge">runtime</code>, and <code class="language-plaintext highlighter-rouge">devel</code> images.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>REPOSITORY TAG SIZE
l4t 32.2-nano-dev-jetpack-4.2.1-devel 5.78GB
l4t 32.2-nano-dev-jetpack-4.2.1-runtime 1.22GB
l4t 32.2-nano-dev-jetpack-4.2.1-base 470MB
l4t 32.2-nano-dev 460MB
l4t nano-dev-jetpack-4.2.1-deps 3.55GB
</code></pre></div></div>
<h2 id="build-flashing-container">Build Flashing Container</h2>
<p>To create a reproducible image for flashing, we’re going to create a container which will house the rootfs and all tooling needed to flash the device.</p>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code> which will drop down a build task list. Select <code class="language-plaintext highlighter-rouge">make <imaging options></code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>, select <code class="language-plaintext highlighter-rouge">32.2-nano-dev-jetpack-4.2.1</code> and hit <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make image-32.2-nano-dev-jetpack-4.2.1
</code></pre></div></div>
<p>This will build an image which contains the root file system and tools for flashing. The root file system is fully configured and has the <code class="language-plaintext highlighter-rouge">nvidia-docker</code> tooling installed, but it does not have any of the main JetPack libraries; we put those into our container images for the application.</p>
<p>Once complete you should see something similar to:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Successfully built 2bc72a171644
Successfully tagged l4t:32.2-nano-dev-jetpack-4.2.1-image
</code></pre></div></div>
<p>We can see the built image:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>docker images
REPOSITORY TAG SIZE
l4t 32.2-nano-dev-jetpack-4.2.1-image 5.8GB
</code></pre></div></div>
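<p>If you’re curious what’s inside, you can poke around the flashing image without kicking off a flash by overriding the entrypoint (the <code class="language-plaintext highlighter-rouge">Linux_for_Tegra</code> path is an assumption based on the BSP layout):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run --rm -it --entrypoint /bin/bash l4t:32.2-nano-dev-jetpack-4.2.1-image
# then, inside the container:
ls /Linux_for_Tegra/rootfs
</code></pre></div></div>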
<h2 id="determine-sd-card-size">Determine SD Card Size</h2>
<p>The production Nano will have a <code class="language-plaintext highlighter-rouge">16GB</code> eMMC 5.1 flash. This drive has a <code class="language-plaintext highlighter-rouge">14GiB</code> usable capacity (<code class="language-plaintext highlighter-rouge">ROOTFSSIZE</code>); this is static, and the flash scripts have these values hard-coded, but we can override them. The <code class="language-plaintext highlighter-rouge">EMMCSIZE</code> describes the size of the drive, even if we are using a MicroSD card. The maximum <code class="language-plaintext highlighter-rouge">ROOTFSSIZE</code> is <code class="language-plaintext highlighter-rouge">EMMCSIZE</code> - <code class="language-plaintext highlighter-rouge">BOOTPARTSIZE</code>, where <code class="language-plaintext highlighter-rouge">BOOTPARTSIZE</code> is <code class="language-plaintext highlighter-rouge">8MiB</code> (<code class="language-plaintext highlighter-rouge">BOOTPARTSIZE=8388608</code>).</p>
<p>This works for production modules with eMMCs, but when targeting external drives and MicroSD cards, we have to work harder.</p>
<p>We have three ways of determining the size of our drive:</p>
<ol>
<li>Insert the drive into a computer.</li>
<li>Flash the device normally, then look, and re-flash.</li>
<li>Calculated guessing</li>
</ol>
<p>The reason we have these options is that MicroSD cards’ stated capacities are lies. They are all close, but some are over and some are under what they state, and by how much varies. Despite this, we need to figure out the real size so that we can get our <code class="language-plaintext highlighter-rouge">EMMCSIZE</code>.</p>
<p>But why don’t we just repartition the drive after flashing? The simple answer is that your device will cease to boot: you’d have to delete several partitions before being allowed to create a data partition from the extra space, leaving the device non-functional.</p>
<h3 id="using-parted">Using: Parted</h3>
<p>The first option usually requires an adapter for the MicroSD card. Once inserted, running <code class="language-plaintext highlighter-rouge">sudo parted <<<'unit MiB print all'</code> will give the size of each device in MiB along with some other values. Save the volume size off as we’ll need it for the next steps. For example:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~<span class="nv">$ </span><span class="nb">sudo </span>parted <span class="o"><<<</span><span class="s1">'unit MiB print all'</span>
<span class="o">[</span><span class="nb">sudo</span><span class="o">]</span> password <span class="k">for</span> <user>:
GNU Parted 3.2
<span class="c">#...</span>
Model: SD SN128 <span class="o">(</span>sd/mmc<span class="o">)</span>
Disk /dev/mmcblk0: 121942MiB
Sector size <span class="o">(</span>logical/physical<span class="o">)</span>: 512B/512B
<span class="c">#...</span>
</code></pre></div></div>
<p>For this <code class="language-plaintext highlighter-rouge">128GB</code> drive the size is reported as <code class="language-plaintext highlighter-rouge">121942MiB</code>. A <code class="language-plaintext highlighter-rouge">128GB</code> drive should be <code class="language-plaintext highlighter-rouge">122070MiB</code> (rounded down, as <code class="language-plaintext highlighter-rouge">122070MiB</code> is <code class="language-plaintext highlighter-rouge">127.999672GB</code>), so this drive is <code class="language-plaintext highlighter-rouge">128MiB</code> smaller than expected. <em>This is why we are measuring.</em></p>
<h3 id="flashing-twice">Flashing Twice</h3>
<p>Flash the device normally, then open the Disks application (or use <code class="language-plaintext highlighter-rouge">parted</code> as above) to get the drive size. You’ll have to convert bytes to MiB if using the Disks application.</p>
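<p>The conversion is simply bytes divided by <code class="language-plaintext highlighter-rouge">1048576</code> (1024 × 1024); for example, using an illustrative byte count that Disks might report:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo $(( 127865454592 / 1024 / 1024 ))MiB
# 121942MiB
</code></pre></div></div>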
<p>To flash using the image we just built:</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>./flash/flash.sh l4t:32.2-nano-dev-jetpack-4.2.1-image
</code></pre></div></div>
<p>You’ll see the drive listed at <code class="language-plaintext highlighter-rouge">15GB</code> but the <code class="language-plaintext highlighter-rouge">parted</code> and <code class="language-plaintext highlighter-rouge">Disks</code> sizes will be different.</p>
<h3 id="calculated-guess">Calculated Guess</h3>
<p>I sampled a dozen MicroSD cards and calculated their theoretical vs reported MiB values. With the exception of one drive which was <code class="language-plaintext highlighter-rouge">2.37%</code> low, all other cards were only off <code class="language-plaintext highlighter-rouge">0.1-0.6%</code>. If we convert the card size from GB to MiB and then multiply by <code class="language-plaintext highlighter-rouge">99%</code> (assuming <code class="language-plaintext highlighter-rouge">1%</code> off and low), we get these values:</p>
<table>
  <thead>
    <tr>
      <th>Card Size</th>
      <th>EMMCSIZE</th>
      <th>ROOTFSSIZE (with 8MiB offset)</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>512GB</td><td>483398MiB</td><td>483390MiB</td></tr>
    <tr><td>400GB</td><td>377655MiB</td><td>377647MiB</td></tr>
    <tr><td>256GB</td><td>241682MiB</td><td>241674MiB</td></tr>
    <tr><td>128GB</td><td>120849MiB</td><td>120841MiB</td></tr>
    <tr><td>64GB</td><td>60424MiB</td><td>60416MiB</td></tr>
    <tr><td>32GB</td><td>30211MiB</td><td>30203MiB</td></tr>
  </tbody>
</table>
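<p>As a rough sketch, the same estimate can be reproduced with shell arithmetic (integer rounding may differ from the table above by a MiB or so):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SIZE_GB=128                    # card's advertised size in GB
EMMCSIZE=$(( SIZE_GB * 1000 * 1000 * 1000 / 1024 / 1024 * 99 / 100 ))
ROOTFSSIZE=$(( EMMCSIZE - 8 )) # leave room for the 8MiB boot partition
echo "EMMCSIZE=${EMMCSIZE}MiB ROOTFSSIZE=${ROOTFSSIZE}MiB"
# EMMCSIZE=120849MiB ROOTFSSIZE=120841MiB
</code></pre></div></div>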
<p>While this should work, I recommend you measure and use the real number.</p>
<h2 id="flashing">Flashing</h2>
<p>Now that we have our <code class="language-plaintext highlighter-rouge">EMMCSIZE</code> and calculated the <code class="language-plaintext highlighter-rouge">ROOTFSSIZE</code> (subtracting <code class="language-plaintext highlighter-rouge">8MiB</code>), we can flash the device.</p>
<p>Set your jumpers for flashing, cycle the power or reboot the device. Ensure that it shows up when you run <code class="language-plaintext highlighter-rouge">lsusb</code> (there will be a device with <code class="language-plaintext highlighter-rouge">Nvidia Corp</code> in the line):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>lsusb
<span class="c">#...</span>
Bus 001 Device 069: ID 0955:7020 NVidia Corp.
<span class="c">#...</span>
</code></pre></div></div>
<p>Now that the device is ready, we can flash it:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># -S : Rootfs size in bytes. KiB, MiB, GiB short hands are allowed</span>
<span class="c"># -e : Target device's eMMC size.</span>
~/jetson-containers<span class="nv">$ </span>./flash/flash.sh l4t:32.2-nano-dev-jetpack-4.2.1-image <span class="nt">-S</span> 121934MiB <span class="nt">-e</span> 121942MiB
</code></pre></div></div>
<p>The device should reboot automatically once flashed. Follow prompts on the device to accept the license terms and configure the environment. You can now use all of the space on your MicroSD card on the Jetson Nano Dev Kit.</p>Jetson Containers - Pushing Images to Devices2019-07-21T06:00:00+00:002019-07-21T06:00:00+00:00http://codepyre.com/2019/07/pushing-images-to-devices<h1 id="introduction">Introduction</h1>
<p>If you haven’t walked through the <a href="/2019/07/jetson-containers-introduction">first post</a> covering an introduction to Jetson containers, I’d recommend looking at it first.</p>
<p>Compiling the CUDA samples for the Nano is really hard <a href="/2019/07/jetson-containers-samples">compared to using the Xavier</a> as it doesn’t have nearly the resources required. We can get around this by compiling the container on the host.</p>
<p>Once you completed <a href="/2019/07/maximizing-jetson-nano-storage#create-dependencies-image">creating the dependencies image</a> and <a href="/2019/07/maximizing-jetson-nano-storage#create-the-jetpack-images">creating the JetPack images</a>, we can build the samples.</p>
<h1 id="building-the-samples">Building the Samples</h1>
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <build samples></code>, select <code class="language-plaintext highlighter-rouge">build-32.2-nano-dev-jetpack-4.2.1-samples</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make build-32.2-nano-dev-jetpack-4.2.1-samples
</code></pre></div></div>
<p>Which runs:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">--build-arg</span> <span class="nv">IMAGE_NAME</span><span class="o">=</span>l4t <span class="se">\</span>
<span class="nt">-t</span> l4t:32.2-nano-dev-jetpack-4.2.1-samples <span class="se">\</span>
<span class="nt">-f</span> /home/<user>/dev/jetson-containers/docker/examples/samples/Dockerfile <span class="se">\</span>
<span class="nb">.</span>
</code></pre></div></div>
<p>At the end we should have:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>docker images
REPOSITORY TAG SIZE
l4t 32.2-nano-dev-jetpack-4.2.1-samples 2.34GB
</code></pre></div></div>
<p>Assuming you’ve followed the device setup in the <a href="/2019/07/jetson-containers-introduction">first post</a>, we can now push this image to the device. This will save a lot of time compared to pushing to a container registry and then pulling the image down.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker save l4t:32.2-nano-dev-jetpack-4.2.1-samples | ssh user@host <span class="s1">'docker load'</span>
<span class="c"># Or if you have pv installed, it can be used to monitor progress.</span>
docker save l4t:32.2-nano-dev-jetpack-4.2.1-samples | pv | ssh user@host <span class="s1">'docker load'</span>
</code></pre></div></div>
<p>Once completed, the samples image will be available on the device. Set the <code class="language-plaintext highlighter-rouge">DOCKER_HOST</code> variable in the <code class="language-plaintext highlighter-rouge">.env</code> file to proxy the run to the device: <code class="language-plaintext highlighter-rouge">DOCKER_HOST=ssh://<user>@<device>/<ip></code>. To run the image:</p>
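<p>A concrete <code class="language-plaintext highlighter-rouge">.env</code> example, assuming Docker’s standard <code class="language-plaintext highlighter-rouge">ssh://</code> host syntax (the user and address are placeholders):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>DOCKER_HOST=ssh://nvidia@192.168.55.1
</code></pre></div></div>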
<p>UI:</p>
<p>Press <code class="language-plaintext highlighter-rouge">Ctrl+Shift+B</code>, select <code class="language-plaintext highlighter-rouge">make <run samples></code>, select <code class="language-plaintext highlighter-rouge">run-32.2-nano-dev-jetpack-4.2.1-samples</code>, press <code class="language-plaintext highlighter-rouge">Enter</code>.</p>
<p>Terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers<span class="nv">$ </span>make run-32.2-nano-dev-jetpack-4.2.1-samples
</code></pre></div></div>
<p>Which runs:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="se">\</span>
<span class="nt">--rm</span> <span class="se">\</span>
<span class="nt">-it</span> <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-ctrl <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-ctrl-gpu <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-prof-gpu <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvmap <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-gpu <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-as-gpu <span class="se">\</span>
<span class="nt">--device</span><span class="o">=</span>/dev/nvhost-vic <span class="se">\</span>
l4t:32.2-nano-dev-jetpack-4.2.1-samples
</code></pre></div></div>
<p>Starting in JetPack 4.2.1, the <code class="language-plaintext highlighter-rouge">nvidia-docker</code> runtime is installed on the device. This isn’t available through <code class="language-plaintext highlighter-rouge">DOCKER_HOST</code> proxying. Open an SSH session to the device.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/jetson-containers$ ssh user@device
user@nano-dev:~$ nvidia-docker run --rm -it l4t:32.2-nano-dev-jetpack-4.2.1-samples ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA Tegra X1"
CUDA Driver Version / Runtime Version 10.0 / 10.0
CUDA Capability Major/Minor version number: 5.3
Total amount of global memory: 3956 MBytes (4148543488 bytes)
( 1) Multiprocessors, (128) CUDA Cores/MP: 128 CUDA Cores
GPU Max Clock rate: 922 MHz (0.92 GHz)
Memory Clock rate: 13 Mhz
Memory Bus Width: 64-bit
L2 Cache Size: 262144 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: Yes
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS
</code></pre></div></div>
<p>Now we have a quick way to build images on the <code class="language-plaintext highlighter-rouge">x86_64</code> host and push directly to the device.</p>