A Chinese company’s publication of AI-enhanced satellite images of US bases in the Middle East is helping Iranian forces identify targets, US intelligence believes.

The ABC has been briefed on the intelligence by a source inside US defence, who says the images are endangering lives.

Chinese geospatial artificial intelligence and software company MizarVision, in which the Chinese government holds a small ownership stake, has been publishing detailed satellite images with tagging data of multiple US military sites in the lead-up to, and during, the Iran war.

The imagery showcases an AI tool that identifies and tags military forces across vast areas, a capability that once required the resources of a national intelligence agency.

  • morto@piefed.social

    Oh, that kind of super-resolution has been gaining media attention, but there’s much more beyond the “AI”. There are several mathematical methods based on inverting a point-spread function, statistical methods, super-resolution that extracts sub-pixel information from a sequence of low-resolution images, and several other approaches, including the use of machine learning, though not in the generative way. It’s a very diverse and complex field of research.
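    To make the point-spread-function inversion idea concrete, here is a minimal sketch in NumPy (the 3×3 box blur, image size, and regularisation constant are invented for the toy example; this is a basic Wiener-style inverse filter, not any particular vendor's pipeline):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Invert a known point-spread function in the Fourier domain.
    The regularisation term keeps the division stable at frequencies
    where the PSF response is near zero."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + noise) instead of a naive 1/H
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F))

# Toy demo: blur a point source with a small box PSF, then recover it.
truth = np.zeros((32, 32))
truth[16, 16] = 1.0
psf = np.zeros((32, 32))
psf[:3, :3] = 1 / 9  # 3x3 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))
recovered = wiener_deconvolve(blurred, psf)
```

    The blurred point source is smeared over a 3×3 patch; after deconvolution the energy collapses back to a single sharp peak at the original location.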

    • JohnEdwa@sopuli.xyz

      super-resolution based on extracting subpixel information from a sequence of low resolution images

      So basically DLSS for spy satellites. Kinda neat.

      • testaccount372920@piefed.zip

        DLSS is essentially an advanced interpolation algorithm: it guesses what should lie between two known pixel values. This can be very useful for human operators who need to look at the data, and it has the advantage that you only need a trained model and one image frame at a time. Some ‘superresolution’ methods essentially do this, but ideally you don’t use it until after you’ve applied mathematically correct techniques.
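        As a toy illustration of "guessing what lies between known pixels" (plain separable linear interpolation, not DLSS itself; the pixel values are made up):

```python
import numpy as np

# A 2x2 image upscaled to 3x3 by separable linear interpolation.
# The new centre pixel is purely a guess from its neighbours; no
# information beyond the original four samples is created.
img = np.array([[0.0, 1.0],
                [1.0, 0.0]])
x = np.array([0.0, 0.5, 1.0])  # output sample positions in [0, 1]
rows = np.array([np.interp(x, [0, 1], r) for r in img])       # interpolate rows
up = np.array([np.interp(x, [0, 1], c) for c in rows.T]).T    # then columns
```

        The interpolated centre pixel comes out as 0.5, the average of its neighbours, whatever the true scene looked like there.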

        Superresolution methods exist in many forms. Basically all of them require either some prior knowledge (or assumption) of what you’re looking at, or a lot of data. But once you have that, you can go beyond the optical resolution of your system in a mathematically correct way; you don’t have to guess!

        Some examples:

        • Lens correction: it’s possible to determine how imperfections in your lens affect the image, then correct for them. With this prior lens knowledge your images will be nearly as good as those from a theoretically perfect lens. However, you’re still limited by the laws of physics (the diffraction limit), regardless of how (im)perfect your lens is.
        • Deconvolution: physics tells us how light diffracts (bends) and how this leads to optical limitations. Through deconvolution you can undo this. Finding the correct solution takes a lot of guesswork, but once you have it, you can check that it’s mathematically correct (it’s a bunch of fancy integrals).
        • Using information from multiple pixels, v1: if an object in your image covers more than one pixel, you have more information to determine exactly where that object is. If you know the shape of the object (e.g. a circle), you can fit to it and determine some properties extremely accurately (e.g. the centre of a 1 μm circular particle can routinely be located to 10 nm resolution with a microscope whose optical resolution is 200 nm). This method requires prior knowledge of the shape! Planes and oil storage tanks have known shapes…
        • Using information from multiple pixels, v2: theoretically you just need more information to go beyond the optical resolution. This can be done by taking many images of the same field of view (from slightly different positions?). I don’t know how this works, but I have no doubt that there are people doing this.
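        A tiny sketch of the v1 idea (hypothetical numbers; a Gaussian spot and an intensity-weighted centroid stand in for a full shape fit):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid: uses every pixel the object covers
    to locate it far more precisely than one pixel width."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Synthetic spot deliberately centred off-grid at (10.3, 12.7),
# sampled on an integer pixel grid with a width of 2 pixels.
ys, xs = np.indices((21, 25))
true_y, true_x = 10.3, 12.7
spot = np.exp(-((ys - true_y) ** 2 + (xs - true_x) ** 2) / (2 * 2.0 ** 2))

cy, cx = centroid(spot)
```

        Even though each pixel is a whole unit wide, the fitted centre lands within a small fraction of a pixel of the true position, because many pixels constrain the known shape.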
      • frongt@lemmy.zip

        No, not exactly. More like how astrophotographers stack many exposures of the same target to average out noise and atmospheric distortion. After all, the Hubble was a variant of an NRO spy satellite design.
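        The stacking idea in a few lines (synthetic frames and noise level invented for illustration; averaging N frames cuts uncorrelated noise by roughly a factor of sqrt(N)):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((16, 16))
truth[8, 8] = 1.0                    # a faint point source

# 100 noisy exposures of the same scene
frames = truth + rng.normal(0.0, 0.5, size=(100, 16, 16))

single = frames[0]                   # source is buried in the noise
stacked = frames.mean(axis=0)        # noise drops roughly 10x after stacking
```

        In a single frame the source is indistinguishable from the noise floor; in the stack it stands out cleanly at its true position.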