- cross-posted to:
- [email protected]
WaPo journalist verifies that robotaxis fail to stop for pedestrians in a marked crosswalk 7 out of 10 times. Waymo admitted that it follows “social norms” rather than laws.
The reason is likely to compete with Uber, 🤦
WaPo article: https://www.washingtonpost.com/technology/2024/12/30/waymo-pedestrians-robotaxi-crosswalks/
Cross-posted from: https://mastodon.uno/users/rivoluzioneurbanamobilita/statuses/113746178244368036
Looks like the Revisionist History podcast might need to revise their episode about Waymo… 😅
This is not on Waymo. It’s on the absolute sold-out pieces of shit that allow Waymo and other cunts like Elon to experiment with human lives for money.
I work in a related field to this, so I can try to guess at what’s happening behind the scenes. Initially, most companies had very complicated non-machine-learning algorithms (rule-based/hand-engineered) that solved the motion planning problem, i.e. how a car should move given its surroundings and its goal. This essentially means writing what is comparable to either a bunch of if-else statements, or a sort of weighted graph search (there are other ways, of course). This works well for, say, 95% of cases, but becomes exponentially harder to make work for the remaining 5% of cases (think drunk driver or similar rare or unusual events).
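To make the rule-based idea concrete, here’s a tiny sketch of what one such hand-written rule might look like (invented names and thresholds, nobody’s actual code):

```python
# Invented example of one hand-engineered rule - the kind of if-else
# logic described above, not any company's real planner.
from dataclasses import dataclass

@dataclass
class Pedestrian:
    distance_m: float      # distance ahead of the ego vehicle, meters
    in_crosswalk: bool     # currently inside the marked crosswalk
    waiting_at_curb: bool  # at the curb, about to step in

def should_yield(pedestrians: list[Pedestrian], stopping_distance_m: float) -> bool:
    """Yield if anyone is in, or about to enter, a crosswalk we can stop for."""
    return any(
        (p.in_crosswalk or p.waiting_at_curb) and p.distance_m <= stopping_distance_m
        for p in pedestrians
    )

# The rule itself is easy; the remaining 5% is hard because every edge case
# (pedestrian pauses at the median, waves you through, ...) needs its own
# carefully tuned exception like this one.
print(should_yield([Pedestrian(12.0, False, True)], stopping_distance_m=25.0))  # True
```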
Solving the final 5% was where most turned to machine learning - they were already collecting driving data for training their perception and prediction models, so it’s not difficult at all to just repurpose that data for motion planning.
So when you look at the two kinds of approaches, they have quite distinct advantages over each other. Hand-engineered algorithms are very good at obeying rules - if you tell one to wait at a crosswalk or obey precedence at a stop sign, it will do that no matter what. They are not, however, great at situations with higher uncertainty/ambiguity. For example, a pedestrian starts crossing the road outside a crosswalk and waits at the median to let you pass before continuing on - it’s quite difficult to come up with a one-size-fits-all rule to cover these kinds of situations. Driving is a highly interactive behaviour (lane changes, yielding to pedestrians, etc.), and rule-based methods don’t do so well with this because there is little structure to the problem. Some machine-learning-based methods, on the other hand, are quite good at handling these kinds of uncertain situations, and Waymo has invested heavily in building these up. I’m guessing they’re trained with a mixture of human data + self-play (imitation learning and reinforcement learning), so they may learn some odd/undesirable behaviors. The problem with machine learning models is that they are ultimately a strong heuristic that cannot be trusted to produce a 100% correct answer.
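As a toy illustration of just the imitation half (my assumption - real systems fit deep networks to rich sensor logs, not a linear model):

```python
# Minimal behavior-cloning sketch: fit a policy to logged human
# (state, action) pairs. Toy linear model for illustration only.
import numpy as np

rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 4))            # fake sensor features
human_policy = np.array([0.5, -0.2, 0.1, 0.3]) # stand-in "human driver"
expert_actions = states @ human_policy         # logged human actions

# Least-squares fit = the simplest possible imitation learner.
weights, *_ = np.linalg.lstsq(states, expert_actions, rcond=None)
print(weights)  # recovers roughly the human policy - warts and all,
                # including any habit of rolling through crosswalks
                # if that's what the data shows
```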
I’m guessing that the way Waymo trains its motion planning model, or a bias in the data, allows it to find some sort of exploit that makes it drive through crosswalks. Usually this kind of thing is solved by creating a hybrid system - a machine learning system underneath, with a rule-based system on top as a guard rail.
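The guard-rail pattern might look something like this - again a bare sketch with made-up names, not anyone’s real architecture:

```python
# Hypothetical hybrid planner: a learned policy proposes, a hand-written
# guard rail disposes. All names here are invented for illustration.

def ml_policy(scene: dict) -> str:
    # Stand-in for the learned motion planner (a strong heuristic,
    # not guaranteed correct).
    return scene.get("model_action", "proceed")

def violates_hard_rule(scene: dict, action: str) -> bool:
    # Hard constraint: never proceed while a pedestrian is in the crosswalk.
    return action == "proceed" and scene.get("pedestrian_in_crosswalk", False)

def plan(scene: dict) -> str:
    proposed = ml_policy(scene)
    return "stop" if violates_hard_rule(scene, proposed) else proposed

print(plan({"model_action": "proceed", "pedestrian_in_crosswalk": True}))  # -> stop
```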
(Apologies for the very long comment, probably the longest one I’ve ever left)
Thanks for taking the time to comment! It’s really informative
It is an offense in Japan not to stop if someone is waiting to enter the crosswalk (and technically an offense to proceed until they are fully off the entire street, though I’ve had assholes whip around me for not breaking the law). People do get ticketed for it (though not enough, honestly). I wonder what they would do here.
This is STUPID! I can’t WAIT for President MUSK to ELIMINATE all these Pesky Rules preventing AI Cars from MOWING DOWN CHILDREN In Crosswalks!
The “social norms” line is likely because it was trained using actual driver data. And actual drivers will fail to stop. If it happens enough times in the training data and the system is tuned to favor getting from A to B quickly, then it will inevitably go “well it’s okay to blow through crosswalks sometimes, so why not most of the time instead? It saves me time getting from A to B, which is the primary goal.”
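You can see that failure mode with toy numbers: if the objective weights travel time heavily and prices a crosswalk violation too cheaply, the bad behavior literally scores better (all values invented):

```python
# Invented numbers only. If travel time dominates the objective and a
# crosswalk violation is priced too cheaply, running it "wins".
TIME_PENALTY_PER_SEC = -1.0
VIOLATION_PENALTY = -3.0   # far too small relative to the time saved

def score(seconds: float, violations: int) -> float:
    return TIME_PENALTY_PER_SEC * seconds + VIOLATION_PENALTY * violations

print(score(12.0, 0))  # stop for the pedestrian: -12.0
print(score(5.0, 1))   # blow through:            -8.0  <- scores higher
```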
The funniest thing to me is that this probably isn’t even the fault of AI; it’s probably the fault of software developers too lazy to actually write any semi-decent code that would do a good job of (not) being a nuisance.
Most developers take pride in what they do and would love to build in all the best features for launch.
But that’s not possible. There’s a deadline and a finite budget for programmers. Ipso facto, a finite number of dev hours.
software developers too lazy
company owners too greedy
Software developers don’t get a say in what gets done or not; profit and cost cutting do.
Ethics is an important component of what every worker should do for a living, but we’re not there yet.
Software developers don’t get a say in what gets done or not; profit and cost cutting do.
I mean, that’s true to an extent, but most software development teams are led by a fairly independent group. It’s so abstract that you can’t really control it directly; ultimately, there is somebody with some level of authority and knowledge who should know to do better than this, but just isn’t doing it.
Maybe the higher-ups are pressuring them, but you can’t push things back forever, and you most certainly can’t pull features forever; there is only so much you can remove before you are left with nothing.
You might have a say in how to implement the requirement, but in this case, if the company decided to follow societal norms and not laws, it’s 100% on the management. You might pin this on devs if they were pressured to release an unfinished product - sometimes the pressure is so big devs are afraid to admit it’s not really done, but in this case, it’s such a crucial part of the project I think it’s one of the first things they worked on.
Realistically, it’s more profitable not to stop - customers are impatient, other drivers too, and pedestrians are used to that. To maximize profit, I’d rather risk some tickets than annoy other drivers or customers.
This hasn’t been true at any of the places I’ve worked.
There’s always been some pressure from management, usually through project managers or business users, for urgency around certain features, timelines, releases, etc. Sometimes you’ll have a buffer of protection from these demands, sometimes not.
One place I worked was so consistently relentless about the dev team’s delivery speed that it was a miserable place to work. There was never time to fix the actual pain points because there were always new features being demanded or emergency fixes required because most code bases were a wreck and falling apart.
Here we go again, blaming robots for doing the same thing humans do. Only at least the robots don’t flip you off when they try to run you over.
Wrong: I do blame humans for making robots do wrong things
Humans aren’t programmed to break the law.
What a bullshit argument. One of the arguments for self driving cars is precisely that they are not doing the same thing humans do. And why should they? It’s ludicrous for a company to train them on “social norms” rather than the actual laws of the road. At least when it comes to black and white issues as what is described in the article.
Guess I should have put the /s on that one. Didn’t make it clear enough.
And again… if I break the law, I get a large fine or go to jail. If companies break the law, at worst they get a small fine.
Why does this disconnect exist?
Am I so crazy to demand that companies are not only treated the same, but held to a higher standard? If I don’t stop at a zebra crossing, that is me breaking the law once. Waymo programming their cars not to do that is multiple violations per day, every day. It’s a company deciding they’re above the law because they want more money. It’s a company deciding to risk the lives of others to earn more money.
For me, all managers and engineers who signed off on this and worked on it should be jailed, the company should be barred from doing business for a month and required to immediately ensure all laws are followed, or else…
This is the only way we get companies to follow the rules.
Instead, though, we just ask companies to treat laws as suggestions, sometimes requiring small payments if they cross the line too far.
Funny that you don’t mention company owners or directors, who are supposed to oversee what happens, are in practice the people putting pressure to make it happen, and are the ones liable before the law.
Why does this disconnect exist?
Because the companies pay the people who make the law.
Stating the obvious here but it’s the sad truth
Speaking as someone who lives and walks in SF daily, they’re still more courteous to pedestrians than human drivers are, and I’d be happy if they replaced human drivers in the city. I’d be happier if we got rid of all the cars, but I’ll take getting rid of the psychopaths blowing through intersections.
The reason is likely to compete with Uber, 🤦
A few points of clarity, as I have a family member who’s pretty high up at waymo. First, they don’t want to compete with uber. Waymo isn’t really concerned with driverless cars that you or I would be owning/using, and they don’t want (at this point anyway) to try to start a new taxi service. Right now you order an uber and a waymo car might show up. They want the commercial side of the equation. How much would uber pay to not have to pay drivers? How much would a shipping company fork over when it can jettison the $75k-150k drivers?
Second, I know for a fact that the upper management was pushing for the cars to drive like this. I can nearly quote said family member opining that if the cars followed all the rules of the road, they wouldn’t perform well, couching it in the language of ‘efficiency.’ It was something like, “being polite creates confusion in other drivers. They expect you to roll through the stop sign or turn right ahead of them even if they have right of way.” So now the waymo cars do the same thing. Yay, “social norms.”
A third point is that, as someone else mentioned, the cars are now trained, not ‘programmed’ with instructions to follow. Said family member spoke of when they switched to the machine learning model, and it was better than the highly complicated (and I’m dumbing down my description because I can’t describe it well) series of if-else statements. With that training comes the issue of the folks in charge of things not knowing exactly what is going on. An issue that was described to me was their cars driving right at the edge of the lane, rather than in the center of it, and they couldn’t figure out why or (at that point, anyway) how to fix it.
As an addendum to that third point, the training data is us, quite literally. They get and/or purchase people’s driving data. I think at one time it was actual video; not sure now. So if 90% of drivers blast through at the moment the light turns red if they can, it’s likely you’ll hear about it eventually from waymo. It’s a weakness that ties right into that ‘social norm’ thing. We’re not really training safer driving by having machine drivers; we’re just removing some of the human factors like fatigue or attention deficits. Again, as I get frustrated with the language of said family member (and I’m paraphrasing), ‘how much do we really want to focus on low percentage occurrences? Improving the ‘miles per collision’ is best at the big things.’
Hmmm, yeah, no surprises there, and I like how you articulated it all really well.
On the social norm thing, it’s still a conscious decision how much they invest in teaching their AI to distinguish good from bad behavior. In AI-speak, you can totally mark adequate behavior with rewards and bad behavior with penalties, and then get the car to shift its behavior in the right direction. You can’t predict how it will fine-tune specific behaviors like the lane-edge issue (unless you’re willing to start from scratch if necessary), but overall that’s how you teach it that crossing a red light is a big no-no. Penalties, and if those aren’t enough, start over.
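A made-up illustration of that tuning loop - sweep the penalty until the lawful action wins (all values invented):

```python
# Toy sketch of the tuning idea: raise the red-light penalty until the
# preferred behavior flips to the lawful one.
def preferred_action(red_light_penalty: float) -> str:
    run_it = -5.0 + red_light_penalty  # faster trip, plus the (negative) penalty
    wait   = -12.0                     # slower, but no violation
    return "run the light" if run_it > wait else "wait"

for penalty in (-3.0, -6.0, -9.0, -12.0):
    print(penalty, preferred_action(penalty))
# Once the penalty outweighs the time saved, "wait" wins. And if fine-tuning
# can't fix a behavior, the last resort is retraining from scratch, as noted above.
```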
Then maybe they should make sure to train them with footage and/or data of drivers who are following the traffic laws instead of just whatever drivers they happen to have data from.
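No idea whether anyone actually does this, but a first-pass screen could be as simple as dropping every logged trajectory that contains a labeled violation - purely hypothetical:

```python
# Hypothetical screening step: drop any logged trajectory that contains
# an auto-labeled traffic violation before training on it.
def clean_dataset(trajectories: list[dict]) -> list[dict]:
    return [t for t in trajectories if not t["violations"]]

raw = [
    {"id": 1, "violations": []},                 # lawful driving: keep
    {"id": 2, "violations": ["ran_crosswalk"]},  # reckless: drop
]
print(clean_dataset(raw))  # -> only trajectory 1 survives
```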
Do they review all this training data to make sure data from people driving recklessly is not being included? If so, how? What process do they use to do that?
A third point is that, as someone else mentioned, the cars are now trained, not ‘programmed’ with instructions to follow.
As an addendum to that third point, the training data is us, quite literally.
Yeah, that makes sense. I was in SF a few months ago, and I was impressed with how the Waymos drove - not so much the driving quality (which seemed remarkably average) but how lifelike they drove. They still seemed generally safer than the human-driven cars.
Improving the ‘miles per collision’ is best at the big things.
Given the nature of reinforcement learning algorithms, this attitude actually works pretty well. Obviously, it’s not perfect, and the company should really program in some guardrails to override the decision algorithm if it makes an egregiously poor decision (like, y’know, not stopping at crosswalks for pedestrians), but it’s actually not as bad or ghoulish as it sounds.
but it’s actually not as bad or ghoulish as it sounds
We’ll have to agree to disagree on that one. I think decisions made solely to keep the company’s costs as low as possible, while actively choosing not to care about issues just because their chance is low (we’ve all seen Fight Club, right? If A > B, where A = cost of a recall and B = cost of paying out × chance of occurrence, then no recall), are ghoulish even when the outcomes are devastating.
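With invented numbers, that formula plays out like this:

```python
# The recall formula above, with made-up figures.
recall_cost = 50_000_000          # A: fix every car
expected_payout = 2_000_000 * 10  # B: payout per incident x expected incidents

print("no recall" if recall_cost > expected_payout else "recall")  # A > B -> "no recall"
```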
I remember seeing a video from inside a waymo waiting to make a left against traffic.
It turned the wheel before moving, in anticipation of the turn. Which is normal for most drivers I see on the road.
It’s also the exact opposite of what you should do for safety and legality.
Keep the wheel straight until you’re ready to move; turning the wheel before the turn means that if someone rear-ends you, you get pushed into traffic, not along your current lane.
It’s the social norm, not the proper move.
On a similar note, I’ve noticed the waymos don’t start their turns when there’s a pedestrian in the crosswalk, whereas I see drivers do that very often.
I was involved in a crash many years ago where this resulted in the car in front of us getting pushed into an oncoming car. We were stopped behind a car indicating to turn, hit from behind by a bus (going quite fast), pushed hard into the car in front and they ended up getting smashed from behind and in front.
Don’t turn your wheel until you’re ready to move, folks.
I haven’t driven in a bit, so thank you for that reminder. It’s scary that people instinctively do that.
Can’t be good for your car either to be turning the wheels while stopped.
Are the developers Swedish?
Where can I buy a traffic cone shaped rock?
How do you admit to intentionally ignoring traffic laws and not get instantly shut down by the NTSB?
Because you’re running a business.