cultural reviewer and dabbler in stylistic premonitions

  • So in summary. You’re right. Sealed sender is not a great solution. But

    Thanks :)

    But I still maintain it is entirely useless: its only actual use is to give users the false impression that the server is unable to learn the social graph. It is 100% snake oil.

    it is a mitigation for the period where those messages are being accepted.

    It sounds like you’re assuming that, prior to sealed sender, they were actually storing the server-visible sender information rather than immediately discarding it after using it to authenticate the sender? They’ve always said that they weren’t doing that, but, if they were, they could have simply stopped storing that information rather than inventing their “sealed sender” cryptographic construction.

    To recap: Sealed sender ostensibly exists specifically to allow the server to verify the sender’s permission to send without needing to know the sender’s identity. It isn’t about what is being stored (they could simply not store the sender information); it is about what is being sent. As far as I can tell it only makes any sense if one imagines that a malicious server somehow would not simply infer senders’ identities from the (obviously already identified) receiver connections coming from the same IPs.


  • Sure. If a state serves a subpoena to gather logs for metadata analysis, sealed sender will prevent associating senders to receivers, making this task very difficult.

    Pre-sealed-sender, they already claimed not to keep metadata logs, so complying with such a subpoena[1] would already have required them to change the behavior of their server software.

    If a state wanted to order them to add metadata logging in a non-sealed-sender world, wouldn’t they also probably ask them to log IPs for all client-server interactions (which would enable breaking sealed-sender through a trivial correlation)?

    Note that defeating sealed sender doesn’t require any kind of high-resolution timing or costly analysis; with an adversary-controlled server (eg, one where a state adversary has compelled the operator to alter the server’s behavior via a National Security Letter or something) it is easy to simply record the IP which sent each “sealed” message and also record which account(s) are checked from which IPs at all times.


    1. it would more likely be an NSL or some other legal instrument rather than a subpoena ↩︎
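
    To make the correlation concrete, here is a rough sketch in shell of how little work an adversary-controlled server would need to do to attribute “sealed” messages to sender accounts. The log file names and formats are entirely hypothetical, invented just for illustration:

        # hypothetical logs an adversary-controlled server could trivially produce:
        #   sealed_sends.log:   <unix_timestamp> <client_ip>   (one line per sealed-sender message accepted)
        #   account_checks.log: <client_ip> <account_id>       (which accounts fetch messages from which IPs)

        # sort both logs by IP (join requires input sorted on the join field)
        sort -k2,2 sealed_sends.log   > sends_by_ip.txt
        sort -k1,1 account_checks.log > checks_by_ip.txt

        # join on IP: every "anonymous" sealed send gets attributed to the account(s) seen at that IP
        join -1 2 -2 1 sends_by_ip.txt checks_by_ip.txt

    A real deployment would have to deal with NAT and with multiple accounts behind one IP, but that is refinement, not a fundamental obstacle.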


  • sealed sender isn’t theater, in my view. It is a best effort attempt to mitigate one potential threat

    But, what is the potential threat which is mitigated by sealed sender? Can you describe a specific attack scenario (eg, what are the attacker’s goals, and what capabilities do you assume the attacker has) which would be possible if Signal didn’t have sealed sender but which is no longer possible because sealed sender exists?


  • In case it wasn’t clear, I’m certainly not advocating for using WhatsApp or any other proprietary, centralized, or Facebook-operated communication systems 😂

    But I do think Facebook probably isn’t actually exploiting the content of the vast majority of WhatsApp traffic (even if they turn out to be able to exploit it for any specific user at any time, which wouldn’t surprise me).


  • “Anonymity” is a vague term which you introduced to this discussion; I’m talking about metadata privacy which is a much clearer concept.

    TLS cannot prevent an observer from seeing the source and destination IPs, but it does include some actually-useful metadata mitigations such as Encrypted Client Hello (ECH), which encrypts (among other things) the Server Name Indication. ECH is a very mild mitigation, since the source and destination IPs are intrinsically out of scope for protection by TLS, but unlike Sealed Sender it is not an entirely theatrical use of cryptography: it does actually prevent an on-path observer from learning the server hostname (at least when used alongside some DNS privacy system).

    The on-path part is also an important detail here: the entire world’s encrypted TLS traffic is not observable from a single choke point the way that the entire world’s Signal traffic is.
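
    As an aside, a server’s ECH configuration is published in DNS (as an “ech” parameter on its HTTPS/SVCB record), so you can check whether a given hostname offers it with a reasonably recent dig. This is just a sketch; example.com stands in for whatever site you want to check, and the exact output depends on your dig version:

        # query the HTTPS (type 65) record; if the site publishes an ECH config,
        # it appears as an "ech=..." parameter in the answer
        dig +short HTTPS example.com

        # ECH only helps if the DNS lookup itself is private, e.g. DNS-over-HTTPS
        # to a resolver you trust (requires a dig built with DoH support)
        dig +short HTTPS example.com @1.1.1.1 +https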



  • Arthur Besse@lemmy.ml to Open Source@lemmy.ml · Best apps for private messaging

    Signal protocol is awesome for privacy, not anonymity

    The “privacy, not anonymity” dichotomy is some weird meme that I’ve seen spreading in privacy discourse in the last few years. Why would you not care about metadata privacy if you care about privacy?

    Signal is not awesome for metadata privacy, and metadata is the most valuable data for governments and corporations alike. Why do you think Facebook enabled e2ee after they bought WhatsApp? They bought it for the metadata, not the message content.

    Signal pretends to mitigate the problem it created by using phone numbers and centralizing everyone’s metadata on AWS, but if you think about it for just a moment (see linked comment) the cryptography they use for that doesn’t actually negate its users’ total reliance on the server being honest and following their stated policies.

    Signal is a treasure-trove of metadata of activists and other privacy-seeking people, and the fact that they invented and advertise their “sealed-sender” nonsense to pretend to blind themselves to it is an indicator that this data is actually being exploited: Signal doth protest too much, so to speak.






  • I don’t think anyone called those “web apps” though. I sure didn’t.

    As I recall, the phrase didn’t enter common usage until the advent of AJAX, which allowed for dynamically loading data without loading or re-loading a whole page. Early webmail sites simply loaded a new page every time you clicked a link. They didn’t even need JavaScript.

    The term “web app” hadn’t been coined yet, but even without AJAX I think in retrospect it’s reasonable to call things like the early versions of Hotmail and RocketMail applications: they were functional replacements for native applications, on the web, even though they did require a new page load for every click (or at least every click that required network interaction).

    At some point, though, I’m pretty sure that some clicks didn’t require server connections, and those didn’t require another page load (at least if JS was enabled). This is what “DHTML” originally meant: using JavaScript to modify the DOM client-side, in the era before sans-page-reload network connections were technically possible.

    The term DHTML definitely predates AJAX and the existence of XMLHTTP (later XMLHttpRequest), so it’s also odd that this article writes a lot about the former while not mentioning the latter. (The article actually incorrectly defines DHTML as making possible “websites that could refresh interactive data without the need for a page reload” - that was AJAX, not DHTML.)





  • Arthur Besse@lemmy.ml to Memes@lemmy.ml · Politics 101

    Not sure what you are saying. With the order of the meme reversed it doesn’t make it obvious which point is supposed to be the clearer point of view…

    It isn’t reversed compared to how this meme format is usually used: the glasses-on image is on the bottom, and associated with the viewpoint OP is saying is correct/better.

    If one hasn’t seen (or has forgotten) the film, this is the way that makes sense, since glasses (generally) improve the wearer’s vision.

    This meme’s canonical format is, however, at odds with the actual scene in the 2002 film:

    peter parker glasses meme, but reversed so he is wearing glasses in the top frame instead of the bottom. bottom text "In the movie Spiderman, Peter Parker realizes he can see more clearly without his glasses so the order of the images should be flipped", top text is the same but blurry

    A related meme format which doesn’t have this ambiguity is the much older they live sunglasses meme: there the positions of the two images are used less consistently (though, as with peter parker, glasses-on is usually the lower image), but the glasses-on image showing the truth does actually fit with how it works in the film.









  • Arthur Besse@lemmy.ml to Programmer Humor@programming.dev · There was no other way!

    I’m going to need you to drop a source that will take me less than five minutes to understand

    that sounds like sealioning 🤡 but i'll bite, once:

    are you asking for evidence that lunduke is queerphobic, or that the rust community has a disproportionate number of queer people in it?

    or, do you acknowledge both of those things, and are actually suggesting that lunduke’s vehement opposition to rust could maybe conceivably be entirely coincidental and perhaps he dislikes it for purely technical reasons? 😂

    in any case, i’m not going to link to lunduke but i just checked and confirmed that (as i assumed) if you simply search his twitter for the word rust you can find many tweets (and i only went back a month) where he is in fact complaining about people being queer.


  • just post it on lemmy world as a meme, copypaste a comment that makes the code better along with the original code into the AI agent

    I’m curious if you succeeded with this approach here - have you gotten your LLM to produce a bash function which you can use without needing to understand how to specify an ffmpeg filename pattern yet?

    btw, if you want to try learning the old-fashioned way, have a look at man ffmpeg-formats, where you can find perhaps-useful information like this:
       segment, stream_segment, ssegment
           Basic stream segmenter.
    
           This  muxer  outputs  streams  to  a number of separate files of nearly
           fixed duration. Output filename pattern can be set in a fashion similar
           to image2, or by using a "strftime" template if the strftime option  is
           enabled.
    
           "stream_segment"  is  a variant of the muxer used to write to streaming
           output formats, i.e. which  do  not  require  global  headers,  and  is
           recommended  for  outputting  e.g.  to  MPEG transport stream segments.
           "ssegment" is a shorter alias for "stream_segment".
    
           Every segment starts with a keyframe of the selected reference  stream,
           which is set through the reference_stream option.
    
           Note  that if you want accurate splitting for a video file, you need to
           make the input key frames  correspond  to  the  exact  splitting  times
           expected  by  the  segmenter,  or  the segment muxer will start the new
           segment with the key frame found next after the specified start time.
    
           The segment muxer works best with a single constant frame rate video.
    
           Optionally it can generate a list of the created segments,  by  setting
           the   option   segment_list.   The   list  type  is  specified  by  the
           segment_list_type option. The entry filenames in the segment  list  are
           set by default to the basename of the corresponding segment files.
    
           See  also  the hls muxer, which provides a more specific implementation
           for HLS segmentation.
    
           Options
    
           The segment muxer supports the following options:
    
    [...]
    

    From the image2 section, here is how the filename pattern works:

               sequence
                   Select  a  sequence pattern type, used to specify a sequence of
                   files indexed by sequential numbers.
    
                   A sequence pattern may contain the string "%d" or "%0Nd", which
                   specifies  the  position  of  the  characters  representing   a
                   sequential  number  in each filename matched by the pattern. If
                   the form "%d0Nd" is used, the string representing the number in
                   each filename is 0-padded and N is the total number of 0-padded
                   digits representing the number. The literal character  '%'  can
                   be specified in the pattern with the string "%%".
    
                   If  the  sequence  pattern  contains  "%d" or "%0Nd", the first
                   filename of the file list specified by the pattern must contain
                   a  number  inclusively  contained  between   start_number   and
                   start_number+start_number_range-1,   and   all   the  following
                   numbers must be sequential.
    
                   For example the pattern "img-%03d.bmp" will match a sequence of
                   filenames  of   the   form   img-001.bmp,   img-002.bmp,   ...,
                   img-010.bmp,  etc.;  the  pattern "i%%m%%g-%d.jpg" will match a
                   sequence of filenames of  the  form  i%m%g-1.jpg,  i%m%g-2.jpg,
                   ..., i%m%g-10.jpg, etc.
    

    And btw, the ffmpeg-formats manual does also include examples:

           Examples
    
           •   Remux the content of file in.mkv to a list of segments out-000.nut,
               out-001.nut, etc., and write the  list  of  generated  segments  to
               out.list:
    
                       ffmpeg -i in.mkv -codec hevc -flags +cgop -g 60 -map 0 -f segment -segment_list out.list out%03d.nut
    
           •   Segment  input  and  set  output  format  options  for  the  output
               segments:
    
                       ffmpeg -i in.mkv -f segment -segment_time 10 -segment_format_options movflags=+faststart out%03d.mp4
    
           •   Segment the input file according to the split points  specified  by
               the segment_times option:
    
                       ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
    
           •   Use  the  ffmpeg force_key_frames option to force key frames in the
               input at the specified location, together with the  segment  option
               segment_time_delta  to account for possible roundings operated when
               setting key frame times.
    
                       ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -codec:v mpeg4 -codec:a pcm_s16le -map 0 \
                       -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
    
               In order to force key frames on  the  input  file,  transcoding  is
               required.
    
           •   Segment the input file by splitting the input file according to the
               frame numbers sequence specified with the segment_frames option:
    
                       ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_frames 100,200,300,500,800 out%03d.nut
    
           •   Convert  the  in.mkv  to  TS segments using the "libx264" and "aac"
               encoders:
    
                       ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a aac -f ssegment -segment_list out.list out%03d.ts
    
           •   Segment the input file, and create an M3U8 live  playlist  (can  be
               used as live HLS source):
    
                       ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
                       -segment_list_flags +live -segment_time 10 out%03d.mkv
    
    

    It is actually possible to figure out how to do this and many other ffmpeg tasks even without internet access :)
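
    For what it’s worth, here is a minimal sketch of the kind of bash function I assume you were after, built from the segment muxer options and image2-style %03d pattern quoted above (the function name and defaults are my own invention, not anything standard):

        # split_video: copy-split INPUT into roughly N-second pieces named INPUT-000.ext, INPUT-001.ext, ...
        # note: with -codec copy, splits can only happen at existing keyframes (see the man page text above)
        split_video() {
            local input="$1"
            local seconds="${2:-10}"     # default segment length: 10 seconds
            local base="${input%.*}"     # filename without its extension
            local ext="${input##*.}"     # keep the original container/extension
            ffmpeg -i "$input" -codec copy -map 0 \
                -f segment -segment_time "$seconds" \
                "${base}-%03d.${ext}"
        }

        # usage: split_video in.mkv 30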