1121 - 1140 of 1162 Posts

Registered · 142 Posts
Your faith in human nature is enthusiastic but misplaced. When the choice comes down to greater transportation efficiency through adjusted speeds and driving patterns, or spending billions to build new, larger roads, people will choose the money over the lives. Every. Time.

Have you seen traffic around LA or NY? If you told them that all traffic jams could be eliminated for a one-in-ten-million chance of becoming a human sacrifice, they would absolutely choose the human sacrifices.
Are traffic jams caused by low speed limits? If not, then I don't think we are talking about the same thing.
 

Owner · 1,138 Posts
I'll tell you who's going to have something like that...
Every kit car builder that can get their hands on a wrecked Plaid (of which there will likely be a few).
That is 800 hp of two completely independent motors in one easily integrated package. Apply cooling, power and some CAN bus traffic and you have a fully controllable, torque-vectoring monster of a power unit with very little work.
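For anyone curious what "some CAN bus traffic" might look like, here is a minimal sketch using python-can. Everything specific in it is made up for illustration: the arbitration ID, byte layout and torque scaling are hypothetical, and a real Plaid drive unit's frames would have to be reverse-engineered or documented before anything like this could work.

```python
# Purely illustrative sketch of the "some CAN bus traffic" part: packing a
# per-motor torque request into CAN frames with python-can. The arbitration
# ID, byte layout and scaling here are made up; a real Plaid drive unit's
# frames would have to be reverse-engineered or documented first.
import struct
import can  # pip install python-can

HYPOTHETICAL_TORQUE_ID = 0x2E0   # placeholder ID, not a real Tesla message

def torque_frame(motor: int, torque_nm: float) -> can.Message:
    # example encoding: motor index + torque in 0.1 Nm steps, little-endian
    data = struct.pack("<Bh", motor, int(torque_nm * 10))
    return can.Message(arbitration_id=HYPOTHETICAL_TORQUE_ID, data=data,
                       is_extended_id=False)

bus = can.interface.Bus(channel="can0", interface="socketcan")
# crude torque-vectoring gesture: more torque to the outside motor in a corner
bus.send(torque_frame(motor=0, torque_nm=120.0))
bus.send(torque_frame(motor=1, torque_nm=180.0))
```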

I am surprised that no one has jumped on the dramatic increase in mechanical grip for these cars. My money is still on a significant change in roll center, which will present a challenge for anyone wanting to integrate the power module into another application.
 

Owner · 1,138 Posts
I came across this with no attribution, so I cannot vouch for its accuracy, but if it is correct it speaks volumes for what Tesla has achieved. This is especially true when you look at the weight-to-skidpad crossover for the M5 CS and the Plaid.

[Attached image: weight vs. skidpad comparison chart]
 

Registered · 142 Posts
I have never found skid-pad numbers to be very revealing. If you look at minimum cornering speeds (at the Nürburgring, for example), the Plaid is about the same as the Taycan and worse than the M5 CS. It was the same story when Hagerty put the Plaid up against the M5 CS and the CT5-V Blackwing: the other two cars were faster in the corners, but the Plaid was faster on the straights, leaving it slightly quicker overall.
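A quick back-of-the-envelope shows why the skid-pad gap matters less than it looks: steady-state cornering speed only scales with the square root of lateral grip, so a healthy skid-pad advantage shrinks to a few km/h through any given corner. These are illustrative numbers only, ignoring aero, banking and weight transfer.

```python
# Back-of-envelope only (steady state, no aero, no banking): cornering speed
# scales with the square root of lateral grip, so a healthy skid-pad gap
# shrinks to a few km/h through any given corner.
import math

def corner_speed_kmh(lat_g, radius_m):
    return math.sqrt(lat_g * 9.81 * radius_m) * 3.6

for g in (1.00, 1.05, 1.10):
    print(f"{g:.2f} g through a 50 m radius corner: {corner_speed_kmh(g, 50.0):.1f} km/h")
```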
 

Owner · 1,138 Posts
A bit like Ferrari, who famously liked their cars to handle well in a straight line.
 

2012 MP4-12C · 9,746 Posts · Discussion Starter · #1,127

If this is true, it is tectonic news! Level 3 autonomy is the real deal. I suspect the makers are playing a bit fast and loose with labels, but I hope this is true because it would be an amazing accomplishment. Level 3 requires basically no human intervention while engaged; the driver only has to take over when the system asks.

[Attached screenshot: Level 3 autonomy news]


Elon/Tesla are still testing Level 2 automation and hoping for a release sometime "very soon now" (translation: who knows). Quite a black eye for Tesla to have Mercedes blow past them.
 

Owner · 1,138 Posts
I've watched the AI Day videos Tesla put out. They keep trying different things, learning, and resetting the approach. It sounds like typical hard-core, spaghetti-against-the-wall engineering work. Sooner or later, something will stick.

As for the foundations of the approach, it would seem spot on that (1) you cannot rely on maps, as the instant something changes the map goes from useful to deadly, and (2) we all use vision to drive so, duh, it does work. It's our CPUs that are lacking, not our image processing (for the most part, some continue to be completely unable to see me on a motorcycle). The question for me is can Elon and co solve the problem within an economically viable period. NN may not be the ultimate answer. If that is the case, they are over invested and will find it hard to change course. He has done well with hard problems where others have simply not tried and, in so doing, has forced others to up their game (as pointed out above). I say it is still more likely than not.

As for like/dislike beers/no beers, Elon displays a lot of the traits of highly successful people. They tend to lack skills in other areas or simply do not value those skills in the first place. As they get more successful, they believe in themselves ever more and start to stray out of their lane thinking their big brain applies equally to all situations (like politics). I'd pass on the beer personally.

Lastly, I'll channel my inner T. Swift and say people gotta hate. He is visible and thus will attract attention. It is in human nature to attack. The dude has done things none of the rest of us have. I really enjoy the irony of the financial types evaluating Elon's success as if they had any clue about what it takes to actually create value as opposed to skimming it. Funny world but the laws of Physics still work which tells me we will eventually come back around to sanity. The question is what will be left of what others before us built by the time we return to our senses?
 

Registered · 1,850 Posts
(2) we all use vision to drive so, duh, it does work.
It is always going to be more accurate to map 3D space with lasers, actually measuring distance and velocity, than it is to deduce them from 2D camera images. That's before we even consider night time driving, and other non-ideal conditions.

The aim is to make autonomous cars that are safer than (good) human drivers, not merely to match the average driver in some conditions. I suspect that Tesla will have to settle for something in between.

Whether we admit it or not, we do (at times) drive without enough information to be certain of safety. This includes when we can't really see far enough (especially at night in unlit areas), when the sun is low, when rain and sun hide road markings, in dense traffic, when we don't scan other directions enough, etc. Cars that can "see" simultaneously in all directions, and all (/ more) lighting and weather conditions, should be able to be a lot safer.
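To put rough numbers on the lasers-vs-cameras point above: for a stereo pair, depth is Z = f·B/d, so a fixed disparity error grows with the square of distance, while a lidar return keeps a roughly constant range error. The focal length, baseline and error figures in this sketch are assumptions, not any production car's specs.

```python
# Rough numbers on the lasers-vs-cameras point: for a stereo pair, depth is
# Z = f*B/d (focal length f, baseline B, disparity d), so a fixed disparity
# error grows with Z^2, while a lidar return keeps a roughly constant range
# error. The focal length, baseline and error figures below are assumptions,
# not any production car's specs.
FOCAL_PX = 1000.0        # focal length in pixels (assumed)
BASELINE_M = 0.2         # camera separation (assumed)
DISPARITY_ERR_PX = 0.25  # stereo matching error (assumed)
LIDAR_ERR_M = 0.03       # typical-ish ranging error (assumed)

for z in (10, 30, 60, 100):
    stereo_err = (z ** 2) * DISPARITY_ERR_PX / (FOCAL_PX * BASELINE_M)
    print(f"{z:3d} m:  stereo depth error ≈ ±{stereo_err:5.2f} m   lidar ≈ ±{LIDAR_ERR_M:.2f} m")
```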
It's our CPUs that are lacking, not our image processing (for the most part, some continue to be completely unable to see me on a motorcycle).
Optical illusions.
The question for me is can Elon and co solve the problem within an economically viable period.
They'll probably get a system that can drive well enough to be viable eventually, but it will be limited by using cameras, so others will make better performing (but more expensive) systems.
NN may not be the ultimate answer.
They concern me, although I don't know enough about how much they are relied upon to know how worried I should be.

If that is the case, they are over invested and will find it hard to change course. He has done well with hard problems where others have simply not tried and, in so doing, has forced others to up their game (as pointed out above). I say it is still more likely than not.

As for like/dislike beers/no beers, Elon displays a lot of the traits of highly successful people. They tend to lack skills in other areas or simply do not value those skills in the first place. As they get more successful, they believe in themselves ever more and start to stray out of their lane thinking their big brain applies equally to all situations (like politics).
Agreed.
I'd pass on the beer personally.
I was going to invite you too ;)
Lastly, I'll channel my inner T. Swift and say people gotta hate. He is visible and thus will attract attention. It is in human nature to attack. The dude has done things none of the rest of us have. I really enjoy the irony of the financial types evaluating Elon's success as if they had any clue about what it takes to actually create value as opposed to skimming it.
Yep. He can be a bit of a twat, but he's making sci-fi happen. I want to see more of that before I time-out.
 

Premium Member · 571 Posts
It is always going to be more accurate to map 3D space with lasers, actually measuring distance and velocity, than it is to deduce them from 2D camera images. That's before we even consider night time driving, and other non-ideal conditions.
+1000

The CPUs we can fit into a 5,000 lb space with active cooling systems and 100 kWh of energy are fantastical. It's not the CPUs that are limited, but the AI software techniques. Turning camera images into a real-time tactical action plan better and faster than a human, or frankly even a dog, is almost a boil-the-ocean approach. It's basically the same as "invent Skynet". Just turning pixels into structures is tough. But "don't hit the object" begs the question "what is an object?" Tesla is trying to avoid teaching a computer philosophy by training it to memorize trillions of conditions and the appropriate responses without understanding anything. But rain is different from fog is different from dusk is different from snow... any deviation from a previously memorized pattern is a fail.

Biological systems do NOT work this way, so actually there isn’t any precedent that just vision will work. If I teach you to drive at dawn, I don’t need to teach you again to drive at dusk. If I teach you to drive in low traction conditions, I don’t need to enumerate every different circumstance.
 

Registered · 4,320 Posts
Right. Quantum processors will be a game changer in this respect.
 

Owner · 1,138 Posts

Not wanting to debate the lay of the land here, but Neural Networks are the AI implementation of "drive at dawn, I don't need to teach you again to drive at dusk". They are aiming to achieve almost exactly what you are describing.
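As a toy illustration of that point, and emphatically not anything Tesla actually ships, here is a sketch contrasting a model that memorizes absolute pixel values for one lighting condition with one that uses a brightness-invariant representation, so training at "dawn" carries over to "dusk". All the data and numbers are synthetic.

```python
# Toy sketch (not anything Tesla actually does): the difference between
# memorizing absolute pixel values for one lighting condition and learning a
# brightness-invariant representation that carries over from "dawn" to "dusk".
import numpy as np

rng = np.random.default_rng(0)

def scenes(n, obstacle, brightness):
    """16-pixel toy camera strip; an obstacle adds a bright bump mid-frame."""
    img = brightness * np.ones((n, 16))
    if obstacle:
        img[:, 6:10] += 0.5 * brightness
    return img + 0.02 * rng.normal(size=(n, 16))

def normalize(x):
    # contrast-normalize: throw away absolute brightness, keep structure
    return (x - x.mean(1, keepdims=True)) / (x.std(1, keepdims=True) + 1e-6)

# "dawn" training data only (bright scenes)
X = np.vstack([scenes(200, True, 1.0), scenes(200, False, 1.0)])
y = np.r_[np.ones(200), np.zeros(200)]

def centroid_classifier(feats):
    c1, c0 = feats(X[y == 1]).mean(0), feats(X[y == 0]).mean(0)
    return lambda x: (np.linalg.norm(feats(x) - c1, axis=1)
                      < np.linalg.norm(feats(x) - c0, axis=1)).astype(float)

raw = centroid_classifier(lambda x: x)   # memorizes absolute brightness
inv = centroid_classifier(normalize)     # brightness-invariant features

# "dusk" test data (dim scenes) the models never saw during training
Xt = np.vstack([scenes(100, True, 0.2), scenes(100, False, 0.2)])
yt = np.r_[np.ones(100), np.zeros(100)]
print("raw pixels :", (raw(Xt) == yt).mean())
print("invariant  :", (inv(Xt) == yt).mean())
```

The raw-pixel version falls apart on the dimmer test set while the invariant one holds up, which is the same dawn-to-dusk generalization argument in miniature.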

There is a point in the latest Tesla AI video where the car evaluates the possible paths of several different vehicles in a difficult situation. It can do it faster and more consistently than a person can. It does not get bored, sleepy or lazy so, if you can make it smart enough, it will be better simply by being on task all the time.
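For a feel of what "evaluates the possible paths of several vehicles" can mean at its very simplest, here is a sketch that rolls every tracked car forward on a constant-velocity assumption and flags anything that comes too close to the ego path. Real planners use far richer, multi-modal predictions with uncertainty; the positions, velocities and thresholds below are invented.

```python
# Minimal sketch: constant-velocity rollout of each tracked car and a
# proximity check against the ego path. Numbers are invented.
import numpy as np

dt, horizon = 0.1, 3.0                       # 3 s lookahead at 10 Hz
steps = int(horizon / dt)

# tracked objects: position (x, y) in metres, velocity (vx, vy) in m/s
tracks = {
    "car_12": {"pos": np.array([20.0, 3.5]), "vel": np.array([-2.0, 0.0])},
    "car_17": {"pos": np.array([40.0, 0.0]), "vel": np.array([-8.0, 0.0])},
}
ego = {"pos": np.array([0.0, 0.0]), "vel": np.array([12.0, 0.0])}

def rollout(obj):
    t = np.arange(1, steps + 1)[:, None] * dt
    return obj["pos"] + t * obj["vel"]       # (steps, 2) future positions

ego_path = rollout(ego)
for name, trk in tracks.items():
    gap = np.linalg.norm(rollout(trk) - ego_path, axis=1)
    i = int(gap.argmin())
    if gap[i] < 2.5:                          # closer than a car width plus margin
        print(f"{name}: potential conflict in {(i + 1) * dt:.1f} s ({gap[i]:.1f} m)")
    else:
        print(f"{name}: clear, min gap {gap[i]:.1f} m")
```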

That is one very big IF with one very big pay off if they succeed.

All that has been said about the usefulness of mapped roads, IR, lidar, ultrasonic, radar and other aids is spot on. My point was that, if you can achieve the goal with bi-optic cameras (for depth perception, like us), you have an incredibly cost-effective solution and you have generated an image recognition engine that has uses well beyond just driving a car. You also have way more "eyes" than a human in that there are cameras pointing in all directions working simultaneously.

Again, the very big IF.
 

2012 MP4-12C · 9,746 Posts · Discussion Starter · #1,140
If you understand how his software works, and I won't pretend to understand it fully, basically he uses optical input, a video stream, to generate a 3-D map in real time. As things get occluded and come back into view, it can discern what the objects are, and it builds a real-time map and gives them trajectories.

A lot of the machine learning comes in when trying to discern what those objects are and label them. Over time that gets better and better. And from the labels you can then infer a lot of behavior: if it looks like a human being but it's really short, the chances of it being a kid who runs out in a random direction between cars increase greatly.
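A bare-bones sketch of that occlusion behavior: a track keeps coasting on its last velocity estimate while its object is hidden, and is re-associated when a detection reappears nearby. The gating distance, coast time and positions here are arbitrary, and real data association is far more sophisticated than nearest-neighbour matching.

```python
# Bare-bones occlusion-tolerant tracker: coast hidden tracks on their last
# velocity and re-associate when a detection reappears nearby.
import numpy as np

COAST_FRAMES = 15   # how long a track survives with no detection
GATE_M = 2.0        # max distance to match a detection to a track

class Track:
    def __init__(self, pos):
        self.pos = np.asarray(pos, float)
        self.vel = np.zeros(2)
        self.missed = 0

    def predict(self, dt):
        self.pos = self.pos + self.vel * dt          # coast forward even if unseen

    def update(self, det, dt):
        det = np.asarray(det, float)
        self.vel = self.vel + (det - self.pos) / dt  # correct velocity by the innovation
        self.pos, self.missed = det, 0

def step(tracks, detections, dt=0.1):
    for trk in tracks:
        trk.predict(dt)
    for det in detections:
        dists = [np.linalg.norm(trk.pos - det) for trk in tracks]
        if dists and min(dists) < GATE_M:
            tracks[int(np.argmin(dists))].update(det, dt)
        else:
            tracks.append(Track(det))                # a new object entered the scene
    for trk in tracks:
        if all(np.linalg.norm(trk.pos - d) >= GATE_M for d in detections):
            trk.missed += 1
    return [t for t in tracks if t.missed <= COAST_FRAMES]

# usage: an object appears, is occluded for a few frames, then reappears
tracks = step([], [np.array([10.0, 0.0])])
for _ in range(5):
    tracks = step(tracks, [])                        # occluded: track keeps coasting
tracks = step(tracks, [np.array([10.2, 0.0])])       # picked up again, not duplicated
print(len(tracks), "track(s); position:", tracks[0].pos)
```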

So a lot of the benefit of LiDAR is having those depth maps, but think of the processing involved here. That's all being generated in real time. So now you have another input stream, and that input stream is just as constant as the video stream, so you then have to render it, figure out which parts of it are adding any value, and you're probably throwing the vast majority of it away.

It probably will help disambiguate ambiguous scenarios a bit, but the cost is another stream, the real-time difficulty of syncing up those two stream models in the instances and places where disambiguation is useful, and the extra processing overhead to make that disambiguation. At some point you reach diminishing returns on what it can disambiguate, yet the processing overhead is still huge.

So you then basically have to slow down that rendering for the disambiguation of another stream. And I think that's where some of the difficulty comes in with LiDAR.
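The "syncing up those two streams" problem is easy to underestimate. Even the first step, pairing each camera frame with the nearest lidar sweep by timestamp within a tolerance, leaves most frames without a fresh sweep at typical rates. The timestamps and tolerance in this sketch are invented, and real systems also have to compensate for ego motion during each sweep.

```python
# Minimal sketch of the first step of syncing two sensor streams: pair each
# camera frame with the nearest lidar sweep by timestamp, within a tolerance.
import bisect

camera_ts = [0.000, 0.033, 0.066, 0.100, 0.133]   # ~30 fps frames
lidar_ts  = [0.000, 0.100, 0.200]                 # ~10 Hz sweeps
TOLERANCE_S = 0.025                               # max acceptable time skew

def nearest_sweep(t, sweeps):
    i = bisect.bisect_left(sweeps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(sweeps)]
    best = min(candidates, key=lambda j: abs(sweeps[j] - t))
    return best if abs(sweeps[best] - t) <= TOLERANCE_S else None

for t in camera_ts:
    m = nearest_sweep(t, lidar_ts)
    paired = "no usable sweep" if m is None else f"sweep @ {lidar_ts[m]:.3f}s"
    print(f"camera frame @ {t:.3f}s -> {paired}")
```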
That said, I'm pretty sure there are some circumstances where the lidar would be pretty handy. For example, if you're in thick fog, having something that can penetrate and see better would certainly help. Anyway, I'm not going to second-guess when it's not my forte; it's easy to be a Monday-morning quarterback on these things. I'm sure a lot of really bright people are working really hard to figure out the best thing to do.

Anyway, the bottom line is that building a real-time 3-D model where you have labels and vectors is, to me, clearly a great way to do it. Because with Mercedes they were relying on maps of a road that could be old and out of date. You can't trust that over what you see, and here this is generating the road from what it sees. It's not even a contest; the Mercedes solution is idiotic if it's truly based on stored maps.

Don't get me wrong, using stored maps is just one more cue to help disambiguate things, but in the end you have to believe what you see, not some old stored record.
 