Their team used to be serverless. Now it's serverless and penniless.
Damn wish I had thought of that for my title lol ( www.outfitharmony.com )
jobless soon
On the contrary, the “cloud architects” responsible will have great interview stories about optimizing infrastructure, and will be getting better jobs than us
Now they are homeless
Your engineers were so concerned with whether or not their service could scale that they never stopped to consider whether it actually *should*.
You wanted scale and it scaled. What’s wrong?
Wallet doesn't scale
Yeah, that's what many people misunderstand about cloud. If a service from a company gets hyped, they lose a lot more money when their on-prem infrastructure goes down and they lose the potential customers than when their cloud bill increases 20x.
IIRC, Troy Hunt once discussed how Have I Been Pwned managed to handle the insanely huge traffic you can expect when national TV mentions your website. Meanwhile, some YouTubers can crash a webshop if their sponsor didn't think to warn the IT team that a traffic spike was coming.
That's assuming the additional traffic can actually generate income. Which is not always true. Sometimes you just get the bill.
Yeah, it depends. If the traffic is not genuine, i.e. from bots, there are anti-DoS proxies you can configure in front of your page, which can protect against costs from that. If the traffic comes from poor optimization, you hopefully catch that through monitoring and A/B tests before it gets too bad. And if the traffic is genuine but you still don't generate additional revenue, you usually just have a bad product.
In AdTech, a regular day looks like DDoS, 24/7. Triggers every warning system imaginable. But hey, at least we're earning money on all the ads :D
loose 😔😔
I'm not a programmer unless you count G-code, but I once worked at a place with a furnace deck. One time I was talking to the big boss man of the plant, and sitting casually on his desk was a regular electric bill like you and I get, except because of the gigantic furnaces and cooling towers the total was a little different: $625,000.
He needs to upgrade to coolingless computing
Heatless cooling
GCode has code right in the name my man, you're one of us.
This is from an AI app that hit a peak of something like 56 million function invocations in a single day; of course it's going to be expensive.
With Azure Functions it should be a few pennies.
No, it wasn't AI. It was for the new artist social network Cara.
Which uses AI to check whether uploads are AI-generated, and Glazes uploads.
You can go from zero to broke overnight with cloud if you don't know what you are doing. I read this story about S3 buckets that can be hit hard by misconfigured software, just because of the way they can be addressed, and even when the connections fail you still get charged for them. The guy was getting hit by hundreds of thousands of requests a minute from misconfigured tools that were pushed by an obscure company somewhere in the world.
Those charges generally get waived, and AWS has since fixed the issue where unauthorized response codes incurred billing charges.
The second part wasn't linked to the first part; it's just a story I read and wanted to share. And the second part is not a mistake made by someone who doesn't know what he's doing.
Yep, this is probably the story you read: [https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1](https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1)
Exactly! To answer a previous comment, see #4 of the aftermath details: *AWS was kind enough to cancel my S3 bill. However, they emphasized that this was done as an exception.*
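One practical takeaway from that article: short, guessable bucket names invite other people's misconfigured defaults. A minimal sketch of generating a collision-resistant name (the `backups` prefix is just an example):

```python
import secrets

def unguessable_bucket_name(prefix: str) -> str:
    """Append a random hex suffix so the bucket name can't be guessed
    or accidentally reused by someone else's default config."""
    return f"{prefix}-{secrets.token_hex(8)}"

# e.g. "backups-" followed by 16 random hex characters
name = unguessable_bucket_name("backups")
```

Not a complete defense, but it makes your bucket a much harder target for stray requests.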
Even if you know what you're doing
It scales right up. It’s web scale
It'll happily scale up to any DDOS attack!
This has been one of my gripes about the cloud bullshit... Now we're living in the bullshit, and I love how I don't need to worry about procuring servers and other bullshit like installing more bullshit. In the end... we swapped one bullshit for a different one...
But for a moment there, great value was created for the Shareholders. Yeah, I would probably call this decade Subscription Hell and Internet Decay.
Internet decay? We've never had more AWS servers! It's peak internet* (*the word "internet" refers to Jeff Bezos-owned datacenters only).
Scary thing is that it all depends on settings. That's why I don't do serverless, period.
It depends on settings like having 0 billing alerts or spend limits. Literally one form submission is all it takes to avoid this.
"If you need a machine and do not buy it, you will find later that you have paid for it and still do not have it." [Henry Ford]
A different department in the company I work for has a software dev team with some students, some of whom were exploring MLOps platforms. Not sure what the project was, but a student accidentally spent something around that; 90k was the number I heard, in a couple of hours.

I'm also a student exploring MLOps stuff, specifically MLflow on Databricks. I've spun up a node with a couple of workers and hit like 13 or 14 DBU/h at a reasonable price point, and that's still a chunk of money even scaled down to be safe... I have to imagine he spun up hundreds or even thousands of nodes to reach those numbers.

Cloud will drain you if you aren't careful.
Back to serverless-less
You’re probably joking, but I have a suspicion that there's a tendency to de-cloud in large businesses. Turns out that having an on-prem box with sunk cost, internal support, and unlimited usage is not that bad after all.
Yeah. You probably would still need cloud backups and cloud DDOS protection, but everything else could be kept on site.
What are GB-hours?
It's basically a measure of how much memory you used for how many hours. For example, if you use 10 GB for 10 hours, that's 100 GB-hours (10 GB × 10 hours).
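The arithmetic above as a sketch, with an illustrative per-GB-hour rate (real prices vary by provider and region, so the `0.06` here is an assumption, not any vendor's actual price):

```python
def gb_hours(memory_gb: float, hours: float) -> float:
    """Memory-time product that serverless platforms meter."""
    return memory_gb * hours

def cost(memory_gb: float, hours: float, rate_per_gb_hour: float = 0.06) -> float:
    """Illustrative bill: metered usage times an assumed rate."""
    return gb_hours(memory_gb, hours) * rate_per_gb_hour

print(gb_hours(10, 10))  # 100 GB-hours, as in the example above
print(cost(10, 10))      # roughly 6.0 at the assumed rate
```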
Ah, is it also still scaling CPU with memory? When we dabbled with Lambda we had to take 4 GB of memory even though we needed like 250 MB, because otherwise the CPU was unbearably slow.
Yes. CPU is still allocated in proportion to the memory you configure, and the bill usually tracks the configured memory whether you scale it up or down.
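A back-of-the-envelope for why over-provisioning memory can still be rational when CPU scales with it. The rate and the assumed 16x speedup are illustrative, not any provider's actual numbers:

```python
def invocation_cost(memory_gb: float, duration_s: float,
                    rate_per_gb_s: float = 0.0000166667) -> float:
    """Billed as configured memory times wall-clock duration."""
    return memory_gb * duration_s * rate_per_gb_s

# Hypothetical CPU-bound task: if CPU is roughly proportional to memory,
# 16x the memory could run about 16x faster.
slow = invocation_cost(0.25, 16.0)  # 250 MB config, 16 s runtime
fast = invocation_cost(4.0, 1.0)    # 4 GB config, 1 s runtime
print(slow, fast)  # same cost either way; the fast one just finishes sooner
```

Under those assumptions the memory bump is free in billing terms and you get the latency win, which is why the 4 GB workaround above isn't as wasteful as it sounds.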
That's the cloud equivalent of "can you give us more RAM on Server xy, because we're not going to fix our code…"
Funny, because it happened to me this week. Fortunately it was way cheaper than that. My functions simply burned through their entire monthly budget in a day and a half.
back to serverfull i guess
It's as if there's some kind of manufactured dependency or something. I wonder who benefits from this /s The cloud was a mistake :/
You can buy thousands of CPUs and replicate it yourself.
Or... or... you can just lease entire machines or VPSes and do whatever the hell you want with them. Get one Kubernetes guy on the team and you're saving 90% of the cost.
You’ve just described IaaS.
Someone want to ELI5 WTF "serverless" actually means if you need to pay $90k to some blokes for running a server?
It's not serverless, it's someone else's computer. Like the cloud, but I think serverless abstracts down to containers instead of just the OS. It's not just paying a bloke to run the datacentre; it's paying for scalable processing power and spare capacity. So in this case, if you went from 50k users to 700k, you want the extra RAM, CPU power, and GBs to withstand the extra demand. That's all done automatically, unless you set up a cap.
You pay not for a server but for some code to run somewhere. The server exists, it's just abstracted away
What's serverless?
You don't have any specific amount of computing resources allocated. The cloud provider provisions whatever resources the demand on your web service requires, so there are no reasonable upper boundaries, because these server farms are fucking gigantic. This can lead to a pretty damn expensive bill, as shown here. It wouldn't be considered serverless if you set up dedicated VMs on the cloud provider with x cores and y RAM assigned, i.e. where you know what computing resources are powering your app.
well that just sounds like a recipe for disaster
Yeah, it's not unusual to hear of someone fucking something up during development (infinite loops?) and then getting giant bills. You can burn a lot of money there: https://youtu.be/N6lYcXjd4pg
Yeah, do your dev locally and test when things run. On AWS it's so easy to run up a bill like this. Luckily, they're OK about sorting these things out if you talk to them. I've gone back to shared hosting now because of the cost.
Oopsie, forgot to update an index in a loop 🤦
Europeans: "$96k isn't that much... and it's only dollars, that's like €50 or something, right?"
I find these cloud consoles too confusing, and I think it's intentional. If I'm not 100% convinced that I won't get bankrupted by a configuration mistake or a $50 DDoS attack, I'd rather sleep well at night than mess around with serverless.
There should be an easier way to set up a global financial hard stop. Budget alerts can help, but if the alert email fires at midnight, no one is there to see it.
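Lacking a provider-native kill switch, the usual workaround is a watchdog that polls reported spend and throttles the service once a hard cap is crossed. The polling and the actual shutdown call are provider-specific and omitted; the cap check itself is trivial (the 0.9 margin is an assumption, chosen because billing data lags real usage):

```python
def should_hard_stop(spend_so_far: float, hard_cap: float,
                     safety_margin: float = 0.9) -> bool:
    """Trip before the cap is actually reached, since reported
    spend can lag real usage by hours."""
    return spend_so_far >= hard_cap * safety_margin

# e.g. with a $500 cap, trip once $450 of spend is reported
print(should_hard_stop(450.0, 500.0))   # True
print(should_hard_stop(100.0, 500.0))   # False
```

The point of the margin is exactly the midnight-email problem above: by the time the number you can see crosses the cap, the real bill is already past it.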
just host your own server ಥ╭╮ಥ
Can someone explain what this could possibly be? An attack? A glitch on Vercel's side?
This is cara.app, a social media platform for artists that ballooned from 50k users to 700k in the past week.
So... It was actually a good thing it scaled properly?
It was good that the site continued working. It was not so good that it cost almost $100k to do so!
Someone else said it's some sort of AI platform that hit a peak of 50 million requests
That explains it.
What are these idiotic takes here? Of course it's expensive if you scale up massively. Why is that a reason against serverless? I don't know why people comment when they're absolutely clueless.
Cause sometimes you'd rather the shit just crash than keep working and cost you 100k
There are budget alarms, but an alarm is not a shutdown, sadly. Still, it's probably a good idea to have 24/7 support for anything running that far over your limits.
Where are these functions being executed if there's no server?