<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Rust: reading very large files for the billion row challenge</title>
        <link>https://video.infosec.exchange/videos/watch/869a3df8-2b78-476c-8771-03ab778b1fb3</link>
        <description>Checking out whether memmap can help us read very large files as fast as possible, and wondering how wc manages to be so fast. This is the next bit of the Billion Row Challenge, and probably the closest part to black magic. Read my blog at https://artificialworlds.net/blog Follow me on mastodon: @andybalaam@mastodon.social</description>
        <lastBuildDate>Fri, 10 Apr 2026 11:24:26 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>PeerTube - https://video.infosec.exchange</generator>
        <image>
            <title>Rust: reading very large files for the billion row challenge</title>
            <url>https://video.infosec.exchange/client/assets/images/icons/icon-1500x1500.png</url>
            <link>https://video.infosec.exchange/videos/watch/869a3df8-2b78-476c-8771-03ab778b1fb3</link>
        </image>
        <copyright>All rights reserved, unless otherwise specified in the terms specified at https://video.infosec.exchange/about and potential licenses granted by each content's rightholder.</copyright>
        <atom:link href="https://video.infosec.exchange/feeds/video-comments.xml?videoId=869a3df8-2b78-476c-8771-03ab778b1fb3" rel="self" type="application/rss+xml"/>
        <item>
            <title><![CDATA[Rust: reading very large files for the billion row challenge - Andy Balaam]]></title>
            <link>https://video.infosec.exchange/w/hC32rdos46nZkXCtYvkGzi;threadId=262847</link>
            <guid>https://video.infosec.exchange/w/hC32rdos46nZkXCtYvkGzi;threadId=262847</guid>
            <pubDate>Fri, 10 Apr 2026 08:49:09 GMT</pubDate>
            <content:encoded><![CDATA[<p>@sebsch@chaos.social thanks, that sounds quite likely. Maybe something to investigate if I want to squeeze out the maximum performance.</p>
]]></content:encoded>
            <dc:creator>Andy Balaam</dc:creator>
        </item>
        <item>
            <title><![CDATA[Rust: reading very large files for the billion row challenge - sebsch]]></title>
            <link>https://video.infosec.exchange/w/hC32rdos46nZkXCtYvkGzi;threadId=262847</link>
            <guid>https://video.infosec.exchange/w/hC32rdos46nZkXCtYvkGzi;threadId=262847</guid>
            <pubDate>Fri, 10 Apr 2026 08:35:10 GMT</pubDate>
            <content:encoded><![CDATA[<p><span><a href="https://video.infosec.exchange/a/andybalaam/video-channels" class="u-url mention" target="_blank" rel="noopener noreferrer">@<span>andybalaam</span></a></span> I did not have a chance to look at your video. </p><p>But I think one main reason for the immense throughput of the GNU tools is that they operate directly on file descriptors. </p><p>Rust has nice Linux integrations with <a href="https://docs.rs/nix/latest/nix/" target="_blank" rel="noopener noreferrer"><span>https://</span><span>docs.rs/nix/latest/nix/</span><span></span></a> if you want to handle the raw fds in a safe manner.</p>]]></content:encoded>
            <dc:creator>sebsch</dc:creator>
        </item>
    </channel>
</rss>