• tias@discuss.tchncs.de
    14 hours ago

    Let’s do some estimates:

    • An 8x H100 machine costs about $20 / hr to rent.
    • With a 70B model with 4K context, a H100 node can do about 300 requests in parallel.
    • A single response takes around 30 seconds to generate.
    • An average user sends about 300 messages / month.

    The throughput of a node is

    300 concurrent * (3600 / 30) = 36 000 messages / hour.

    The cost per message, then, is $20 / 36 000 = $.00055…

    With 300 messages per month, the compute cost to the AI vendor is 300 × $20 / 36 000 ≈ $0.17 / month per user. By contrast, a subscription costs $20.
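    The estimate above can be checked with a few lines of arithmetic. This is just a sketch using the thread's assumed figures (rental price, concurrency, response time, messages per user), not measured values:

    ```python
    # Back-of-the-envelope check of the numbers above.
    # All inputs are assumptions from the thread, not measurements.
    node_cost_per_hour = 20.0      # 8x H100 rental, $/hr
    concurrent_requests = 300      # 70B model, 4K context
    seconds_per_response = 30
    messages_per_user_month = 300

    throughput_per_hour = concurrent_requests * (3600 / seconds_per_response)
    cost_per_message = node_cost_per_hour / throughput_per_hour
    cost_per_user_month = cost_per_message * messages_per_user_month

    print(f"{throughput_per_hour:.0f} messages/hour")    # 36000
    print(f"${cost_per_message:.5f} per message")        # $0.00056
    print(f"${cost_per_user_month:.2f} per user/month")  # $0.17
    ```

    Note this assumes every message fits the 4K-context, 30-second profile; longer contexts or longer generations shift all three numbers.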

    So given these assumptions, it’s other things (like R&D, safety research, training runs, free accounts, etc.) that represent the bulk of the cost, and those could be scaled down to turn a profit. What will they do? Given how hyped AI currently is and how competitive the landscape is, I don’t think they’ll increase prices that much. With products like DeepSeek on the horizon which are much cheaper, it’s more likely that they’ll squeeze money out of it by becoming more efficient.

    • B0rax@feddit.org
      13 hours ago

      Well, that entirely depends on your users… coding agents, or agents in general that run for hours, will wreck your calculation.

    • PetteriPano@lemmy.world
      10 hours ago

      It’s a weird market.

      Those H100s are $25k minimum. So $200,000 just in GPUs. Drawing 700W each, or 5.6kW total. At my local prices that’s about a dollar per hour just for electricity.

      It’s going to take you a couple of years to break even at $20/h. They might still hold some value at that point. Or they might be obsolete.
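      The break-even figure can be sketched the same way. This assumes the hardware cost and electricity price quoted above, and 100% rental utilization, which is optimistic; real utilization stretches the payback toward the couple of years mentioned:

      ```python
      # Rough break-even for buying the node and renting it out,
      # using the figures quoted above (assumed, not verified prices).
      gpu_cost = 8 * 25_000        # $200,000 minimum, GPUs alone
      rental_rate = 20.0           # $/hr earned renting the node
      electricity = 1.0            # $/hr at ~5.6 kW, local price

      net_per_hour = rental_rate - electricity
      hours_to_break_even = gpu_cost / net_per_hour
      print(f"{hours_to_break_even / 24 / 365:.1f} years")  # 1.2 years
      ```

      And that ignores the rest of the server (CPU, RAM, networking, cooling), so the real payback period is longer still.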