
Thursday, October 22, 2020

Making Use Of AI Ethics Tuning Knobs In AI Autonomous Cars 

By Lance Eliot, the AI Trends Insider  

There is increasing awareness of the importance of AI Ethics, which involves being mindful of the ethical ramifications of AI systems.

AI developers are being asked to carefully design and build their AI mechanizations with ethical considerations at the forefront of the AI systems development process. When fielding AI, those responsible for the operational use of the AI also need to consider crucial ethical facets of the in-production AI systems. Meanwhile, the public and those using or reliant upon AI systems are starting to clamor for heightened attention to the ethical and unethical practices and capacities of AI.

Consider a simple example. Suppose an AI application is developed to assess car loan applicants. Using Machine Learning (ML) and Deep Learning (DL), the AI system is trained on a trove of data and arrives at some means of choosing among those that it deems are loan worthy and those that are not. 

The underlying Artificial Neural Network (ANN) is so computationally complex that there are no apparent means to interpret how it arrives at the decisions being rendered. Also, there is no built-in explainability capability, and thus the AI is unable to articulate why it is making the choices it is making (note: there is a movement toward including XAI, or explainable AI, components to try to overcome this inscrutability hurdle).

Soon after the AI-based loan assessment application was fielded, protests arose from some who asserted they had been turned down for their car loans due to an improper inclusion of race or gender as a key factor in rendering the negative decision.

At first, the maker of the AI application insists that it did not utilize such factors and professes complete innocence in the matter. It turns out, though, that a third-party audit of the AI application reveals that the ML/DL is indeed using race and gender as core characteristics in the car loan assessment process. Deep within the mathematically arcane elements of the neural network, data related to race and gender were intricately woven into the calculations, having been drawn from the initial training dataset provided when the ANN was crafted.
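
To make that audit scenario concrete, here is a minimal, hypothetical sketch; the dataset, column names, and model choice are assumptions for illustration, not details of the actual loan system. It trains a simple classifier on historical decisions and then uses permutation importance to surface which features, including sensitive ones, are actually driving the predictions.

```python
# Hypothetical audit sketch: which features drive a loan-approval model?
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Assumed historical data with numeric features (categorical columns would need encoding).
df = pd.read_csv("loan_applications.csv")
X = df.drop(columns=["approved"])    # candidate features, possibly including proxies
y = df["approved"]                   # past approve/deny decisions

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Audit-style check: how much does shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```

Disproportionately high importance for columns such as race or gender, or for close proxies such as a zip code, is exactly the kind of signal a third-party audit would flag.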

That is an example of how biases can be hidden within an AI system. It also showcases that such biases can go otherwise undetected: the developers of the AI did not realize the biases existed and were seemingly confident that they had done nothing to warrant their inclusion.

People affected by the AI application might not realize they are being subjected to such biases. In this example, those adversely impacted happened to notice and voice their concerns, but we are apt to witness a lot of AI whose users never realize they are being subjected to biases and are therefore unable to ring the bell of dismay.

Various AI Ethics principles are being proffered by a wide range of groups and associations, hoping that those crafting AI will take seriously the need to consider embracing AI ethical considerations throughout the life cycle of designing, building, testing, and fielding AI.   

AI Ethics typically consists of these key principles: 

1) Inclusive growth, sustainable development, and well-being

2) Human-centered values and fairness

3) Transparency and explainability

4) Robustness, security, and safety

5) Accountability

We certainly expect humans to exhibit ethical behavior, and thus it seems fitting that we would expect ethical behavior from AI too.   

Since the aspirational goal of AI is to provide machines that are the equivalent of human intelligence, being able to presumably embody the same range of cognitive capabilities that humans do, this perhaps suggests that we will only be able to achieve the vaunted goal of AI by including some form of ethics-related component or capacity. 

What this means is that if humans encapsulate ethics, which they seem to do, and if AI is trying to achieve what humans are and do, then the AI ought to have an infused ethics capability, or else it would be something less than the desired goal of achieving human intelligence.

You could claim that anyone crafting AI that does not include an ethics facility is undercutting what should be a crucial and integral aspect of any AI system worth its salt. 

Of course, trying to achieve the goals of AI is one matter; meanwhile, since we are going to be mired in a world with AI, for our safety and well-being as humans we would rightfully argue that AI had better abide by ethical behavior, however that might be achieved.

Now that we’ve covered that aspect, let’s take a moment to ponder the nature of ethics and ethical behavior.  

Considering Whether Humans Always Behave Ethically   

Do humans always behave ethically? I think we can all readily agree that humans do not necessarily always behave in a strictly ethical manner.   

Is ethical behavior by humans able to be characterized solely by whether someone is in an ethically binary state of being, namely either purely ethical versus being wholly unethical? I would dare say that we cannot always pin down human behavior into two binary-based and mutually exclusive buckets of being ethical or being unethical. The real-world is often much grayer than that, and we at times are more likely to assess that someone is doing something ethically questionable, but it is not purely unethical, nor fully ethical. 

In a sense, you could assert that human behavior ranges on a spectrum of ethics, at times being fully ethical and ranging toward the bottom of the scale as being wholly and inarguably unethical. In-between there is a lot of room for how someone ethically behaves. 

If you agree that the world is not a binary ethical choice of behaviors that fit only into truly ethical versus solely unethical, you would therefore also presumably be amenable to the notion that there is a potential scale upon which we might be able to rate ethical behavior. 

This scale might be from the scores of 1 to 10, or maybe 1 to 100, or whatever numbering we might wish to try and assign, maybe even including negative numbers too. 

Let’s assume for the moment that we will use the positive numbers of a 1 to 10 scale for increasingly being ethical (the topmost is 10), and the scores of -1 to -10 for being unethical (the -10 is the least ethical or in other words most unethical potential rating), and zero will be the midpoint of the scale. 

Please do not get hung up on the scale numbering, which can be anything else that you might like. We could even use letters of the alphabet or any kind of sliding scale. The point being made is that there is a scale, and we could devise some means to establish a suitable scale for use in these matters.   

The twist is about to come, so hold onto your hat.   

We could observe a human and rate their ethical behavior on particular aspects of what they do. Maybe at work, a person gets an 8 for being ethically observant, while perhaps at home they are a more devious person, and they get a -5 score. 

Okay, so we can rate human behavior. Could we drive or guide human behavior by the use of the scale? 

Suppose we tell someone that at work they are being observed and their target goal is to hit an ethics score of 9 for their first year with the company. Presumably, they will undertake their work activities in such a way that it helps them to achieve that score.   

In that sense, yes, we can potentially guide or prod human behavior by providing targets related to ethical expectations. I told you a twist was going to arise, and now here it is. For AI, we could use an ethical rating or score to try and assess how ethically proficient the AI is.   

In that manner, we might be more comfortable using that particular AI if we knew that it had a reputable ethical score. And we could also presumably seek to guide or drive the AI toward an ethical score too, similar to how this can be done with humans, and perhaps indicate that the AI should be striving towards some upper bound on the ethics scale. 

Some pundits immediately recoil at this notion. They argue that AI should always be a +10 (using the scale that I’ve laid out herein). Anything less than a top ten is an abomination and the AI ought not to exist. Well, this takes us back into the earlier discussion about whether ethical behavior is binary.

Are we going to hold AI to a “higher bar” than humans by insisting that AI always be “perfectly” ethical and nothing less so?   

This is somewhat of a quandary due to the point that AI overall is presumably aiming to be the equivalent of human intelligence, and yet we do not hold humans to that same standard. 

Some fervently believe that AI must be held to a higher standard than humans, and that we must not accept or allow any AI that cannot meet it.

Others indicate that this seems to fly in the face of what is known about human behavior and raises the question of whether AI can be attained at all if it must do something that humans cannot.

Furthermore, they might argue that forcing AI to do something that humans do not undertake is now veering away from the assumed goal of arriving at the equivalent of human intelligence, which might bump us away from being able to do so as a result of this insistence about ethics.   

Round and round these debates continue to go. 

Those on the must-be-topnotch-ethical-AI side are often quick to point out that by allowing AI to be anything less than a top ten, you are opening Pandora’s box. For example, it could be that the AI dips down into the negative numbers and sits at a -4, or, worse, degrades into being miserably and fully unethical at a dismal -10.

Anyway, this is a debate that is going to continue and not be readily resolved, so let’s move on. 

If you are still of the notion that ethics exists on a scale and that AI might also be measured by such a scale, and if you also are willing to accept that behavior can be driven or guided by offering where to reside on the scale, the time is ripe to bring up tuning knobs. Ethics tuning knobs. 

Here’s how that works. You come in contact with an AI system and are interacting with it. The AI presents you with an ethics tuning knob, showcasing a scale akin to our ethics scale earlier proposed. Suppose the knob is currently at a 6, but you want the AI to be acting more aligned with an 8, so you turn the knob upward to the 8. At that juncture, the AI adjusts its behavior so that ethically it is exhibiting an 8-score level of ethical compliance rather than the earlier setting of a 6. 
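
Purely as a thought experiment, a knob like that could be a very small piece of software. The following sketch is illustrative only and is not any vendor’s actual interface; it simply shows a setting being clamped to the scale discussed above and translated into a strictness factor the rest of the system could consult.

```python
# Toy sketch of an "ethics tuning knob" on the -10..+10 scale discussed above.
class EthicsKnob:
    def __init__(self, setting: int = 6, lo: int = -10, hi: int = 10):
        self.lo, self.hi = lo, hi
        self.setting = max(lo, min(hi, setting))

    def turn_to(self, value: int) -> None:
        # Clamp so the knob can never be set outside the agreed scale.
        self.setting = max(self.lo, min(self.hi, value))

    def strictness(self) -> float:
        # Map the knob position to a 0.0-1.0 factor for the rest of the system.
        return (self.setting - self.lo) / (self.hi - self.lo)

knob = EthicsKnob(setting=6)
knob.turn_to(8)           # the user asks for stricter ethical compliance
print(knob.strictness())  # 0.9 on the -10..+10 scale
```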

What do you think of that? 

Some would bellow out balderdash, hogwash, and just unadulterated nonsense. A preposterous idea, or is it genius? You’ll find experts on both sides of that coin. Perhaps it would be helpful to place the ethics tuning knob within a contextual exemplar to highlight how it might come into play.

Here’s a handy contextual indication for you: Will AI-based true self-driving cars potentially contain an ethics tuning knob for use by riders or passengers that use self-driving vehicles?   

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).   

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Ethics Tuning Knobs 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.   

This seems rather straightforward. You might be wondering where any semblance of ethical behavior enters the picture. Here’s how. Some believe that a self-driving car should always strictly obey the speed limit.

Imagine that you have just gotten into a self-driving car in the morning and it turns out that you are possibly going to be late getting to work. Your boss is a stickler and has told you that coming in late is a surefire way to get fired.   

You tell the AI via its Natural Language Processing (NLP) that the destination is your work address. 

And, you ask the AI to hit the gas, push the pedal to the metal, screech those tires, and get you to work on-time.

But it is clear-cut that if the AI obeys the speed limit, there is absolutely no chance of arriving at work on time, and since the AI is only and always going to go at or below the speed limit, your goose is cooked.

Better luck at your next job.   

Whoa, suppose the AI driving system had an ethics tuning knob. 

Abiding strictly by the speed limit occurs when the knob is cranked up to the top numbers like say 9 and 10. 

You turn the knob down to a 5 and tell the AI that you need to rush to work, even if it means going over the speed limit. At a setting of 5, the AI driving system will mildly exceed the speed limit, though not in places like school zones, and only when the traffic situation seems to allow for safely going faster than the limit by a smidgen.

The AI self-driving car gets you to work on-time!   

Later that night, when heading home, you are not in as much of a rush, so you put the knob back to the 9 or 10 that it earlier was set at. 

Also, you have a child-lock on the knob, such that when your kids use the self-driving car, which they can do on their own since there isn’t a human driver needed, the knob is always set at the topmost of the scale and the children cannot alter it.   
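
To make the speed-limit example concrete, here is a hedged sketch of how a knob setting might translate into a speed policy. The thresholds, the small overage cap, and the function itself are illustrative assumptions, not how any actual AI driving system decides its speed.

```python
# Illustrative only: map a knob setting to a maximum allowed speed.
def max_allowed_speed(posted_limit_mph: float,
                      knob_setting: int,
                      in_school_zone: bool,
                      traffic_allows_overage: bool,
                      child_lock: bool) -> float:
    if child_lock:
        knob_setting = 10             # kids riding alone: pinned to the strictest setting
    if knob_setting >= 9 or in_school_zone or not traffic_allows_overage:
        return posted_limit_mph       # strict compliance with the posted limit
    # Assumed rule: each point below 9 permits a small overage, capped at 10%.
    overage = min(0.10, 0.02 * (9 - knob_setting))
    return posted_limit_mph * (1 + overage)

# Rushing to work with the knob at 5 on a 65 mph road, outside a school zone:
print(max_allowed_speed(65, knob_setting=5, in_school_zone=False,
                        traffic_allows_overage=True, child_lock=False))  # 70.2
```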

How does that seem to you? 

Some self-driving car pundits find the concept of such a tuning knob to be repugnant. 

They point out that everyone will “cheat” and put the knob on the lower scores that will allow the AI to do the same kind of shoddy and dangerous driving that humans do today. Whatever we might have otherwise gained by having self-driving cars, such as the hoped-for reduction in car crashes, along with the reduction in associated injuries and fatalities, will be lost due to the tuning knob capability.   

Others, though, point out that it is ridiculous to think that people will put up with self-driving cars that are restricted drivers, never bending or breaking the law.

You’ll end up with people opting to rarely use self-driving cars, instead driving their human-driven cars because they know they can drive more fluidly and won’t be stuck inside a self-driving car that drives like some scaredy-cat.

As you might imagine, the ethical ramifications of an ethics tuning knob are immense. 

In this use case, there is a kind of obviousness about the impacts of what an ethics tuning knob foretells.   

Other kinds of AI systems will have their semblance of what an ethics tuning knob might portend, and though it might not be as readily apparent as the case of self-driving cars, there is potentially as much at stake in some of those other AI systems too (which, like a self-driving car, might entail life-or-death repercussions).   


Conclusion   

If you really want to get someone going about the ethics tuning knob topic, bring up the allied matter of the Trolley Problem.   

The Trolley Problem is a famous thought experiment involving having to make choices about saving lives and which path you might choose. This has been repeatedly brought up in the context of self-driving cars and garnered acrimonious attention along with rather diametrically opposing views on whether it is relevant or not. 

In any case, the big overarching questions are: will we expect AI to have an ethics tuning knob, and if so, what will it do and how will it be used?

Those that insist there is no cause to have any such device are apt to equally insist that we must have AI that is only and always practicing the utmost of ethical behavior. 

Is that a Utopian perspective or can it be achieved in the real world as we know it?   

Only my crystal ball can say for sure.  

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends.  

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

http://ai-selfdriving-cars.libsyn.com/website 



from AI Trends https://ift.tt/37zf9mw
via A.I .Kung Fu

Application of AI to IT Service Ops by IBM and ServiceNow Exemplifies a Trend 

By John P. Desmond, AI Trends Editor 

The application of AI to IT service operations has the potential to automate many tasks and drive down the cost of operations. 

The trend is exemplified by the recent agreement between IBM and ServiceNow to leverage IBM’s AI-powered cloud infrastructure with ServiceNow’s intelligent workflow systems, as reported in Forbes. 

The goal is to reduce resolution times and lower the cost of outages, which, according to a recent report from Aberdeen, can cost a company $260,000 per hour.

David Parsons, Senior Vice President of Global Alliances and Partner Ecosystem at ServiceNow

“Digital transformation is no longer optional for anyone, and AI and digital workflows are the way forward,” stated David Parsons, Senior Vice President of Global Alliances and Partner Ecosystem at ServiceNow. “The four keys to success with AI are the ability 1) to automate IT, 2) gain deeper insights, 3) reduce risks, and 4) lower costs across your business,” Parsons said.   

The two companies plan to combine their tools in customer engagement to address each of these factors. “The first phase will bring together IBM’s AIOps software and professional services with ServiceNow’s intelligent workflow capabilities to help companies meet the digital demands of this moment,” Parsons stated. 

Arvind Krishna, Chief Executive Officer of IBM stated in a press release on the announcement, “AI is one of the biggest forces driving change in the IT industry to the extent that every company is swiftly becoming an AI company.” ServiceNow’s cloud computing platform helps companies manage digital workflows for enterprise IT operations.  

By partnering with ServiceNow and their market leading Now Platform, clients will be able to use AI to quickly mitigate unforeseen IT incident costs. “Watson AIOps with ServiceNow’s Now Platform is a powerful new way for clients to use automation to transform their IT operations and mitigate unforeseen IT incident costs,” Krishna stated. 

The IT service offering squarely positions IBM as aiming at AI for business. “When we talk about AI, we mean AI for business, which is much different than consumer AI,” stated Michael Gilfix of IBM in the Forbes account. He is the Vice President of Cloud Integration and Chief Product Officer of Cloud Paks at IBM. “AI for business is all about enabling organizations to predict outcomes, optimize resources, and automate processes so humans can focus their time on things that really matter,” he stated.

IBM Watson has handled more than 30,000 client engagements since inception in 2011, the company reports. Among the benefits of this experience is a vast natural language processing vocabulary, which can parse and understand huge amounts of unstructured data. 

Ericsson Scientists Develop AI System to Automatically Resolve Trouble Tickets 

Another experience involving AI in operations comes from two AI scientists with Ericsson, who have developed a machine learning algorithm to help application service providers manage and automatically resolve trouble tickets. 

Wenting Sun, senior data science manager, Ericsson

Wenting Sun, senior data science manager at Ericsson in San Francisco, and Alka Isac, data scientist in Ericsson’s Global AI Accelerator outside Boston, devised the system to help quickly resolve issues with the complex infrastructure of an application service provider, according to an account on the Ericsson Blog. These could be network connection response problems, infrastructure resource limitations, or software malfunctioning issues.

The two sought to use advanced NLP algorithms to analyze text information, interpret human language and derive predictions. They also took advantage of features/weights discovered from a group of trained models. Their system uses a hybrid of an unsupervised clustering approach and supervised deep learning embedding. “Multiple optimized models are then ensembled to build the recommendation engine,” the authors state.  
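
As a rough illustration only, and not Ericsson’s actual system, a hybrid pipeline of that general shape might look like the sketch below: made-up tickets and labels, TF-IDF text features, an unsupervised clustering step, and a small ensemble of classifiers acting as the recommender.

```python
# Simplified sketch of a hybrid (clustering + supervised ensemble) ticket recommender.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# Made-up trouble tickets and resolution categories, purely for illustration.
tickets = [
    "VPN connection drops every hour",
    "disk quota exceeded on node 7",
    "application crashes when uploading a file",
    "cannot reach the database after the latest patch",
    "packet loss between data centers",
    "out of memory on the reporting server",
]
resolutions = ["network", "infrastructure", "software", "software", "network", "infrastructure"]

vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(tickets)

# Unsupervised step: cluster the ticket text so similar issues share a cluster id feature.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_text)
X = np.hstack([X_text.toarray(), kmeans.labels_.reshape(-1, 1)])

# Supervised step: ensemble a couple of simple models into a single recommender.
recommender = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)), ("nb", MultinomialNB())],
    voting="soft",
)
recommender.fit(X, resolutions)

new_ticket = vectorizer.transform(["database unreachable since last deployment"]).toarray()
new_features = np.hstack([new_ticket, kmeans.predict(new_ticket).reshape(-1, 1)])
print(recommender.predict(new_features))  # suggested resolution category
```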

The two describe current trouble ticket handling approaches as time-consuming, tedious, labor-intensive, repetitive, slow, and prone to error. Incorrect triaging often results, which can lead to a ticket being reopened and more time needed to resolve it, making for unhappy customers. When personnel turn over, the human knowledge gained from years of experience can be lost.

Alka Isac, data scientist in Ericsson’s Global AI Accelerator

“We can replace the tedious and time-consuming triaging process with intelligent recommendations and an AI-assisted approach,” the authors stated, with time to resolution expected to be reduced by up to 75% and multiple ticket reopenings avoided.

Sun leads a team of data scientists and data engineers to develop AI/ML applications in the telecommunication domain. She holds a bachelor’s degree in electrical and electronics engineering and a PhD degree in intelligent control. She also drives Ericsson’s contributions to the AI open source platform Acumos (under the Linux Foundation’s Deep Learning Foundation).

As a Data Scientist in Ericsson’s Global AI Accelerator, Isac is part of a team of Data Scientists focusing on reducing the resolution time of tickets for Ericsson’s Customer Support Team. She holds a master’s degree in Information Systems Management majoring in Data Science. 

Survey Finds AI Is Helpful to IT 

In a survey of 154 IT and business professionals at companies with at least one AI-related project in general production, AI was found to deliver impressive results to IT departments, enhancing the performance of systems and making help desks more helpful, according to a recent account in ZDNet.  

The survey was conducted by ITPro Today working with InformationWeek and Interop. 

Beyond benefits of AI for the overall business, many respondents could foresee the greatest benefits going right to the IT organization itself: 63% responded that they hope to achieve greater efficiencies within IT operations. Another 45% aimed for improved product support and customer experience, and another 29% sought improved cybersecurity systems.

The top IT use case was security analytics and predictive intelligence, cited by 71% of AI leaders. Another 56% stated AI is helping with the help desk, while 54% have seen a positive impact on the productivity of their departments. “While critics say that the hype around AI-driven cybersecurity is overblown, clearly, IT departments are desperate to solve their cybersecurity problems, and, judging by this question in our survey, many of them are hoping AI will fill that need,” stated Sue Troy, author of the survey report.   

AI expertise is in short supply. More than two in three successful AI implementers, 67%, report shortages of candidates with needed machine learning and data modeling skills, while 51% seek greater data engineering expertise. Another 42% reported compute infrastructure skills to be in short supply.

Read the source articles and information in Forbes, the IBM press release on the alliance with ServiceNow, on the Ericsson Blog, in ZDNet and from ITPro Today . 



from AI Trends https://ift.tt/2FQDMPV
via A.I .Kung Fu

Testing Finds Automated Driver Assistance Systems to be Unreliable 

By AI Trends Staff  

A European safety assessment rated Tesla’s driver assistance system sixth out of ten in its ability to keep drivers engaged, meaning actively engaged in the driving task as automation assists to some degree.

The Tesla Model 3’s Autopilot scored just 36 when assessed on its ability to maintain a driver’s focus on the road, according to a recent account from Reuters. The Tesla did receive high marks for performance and its ability to respond to emergencies, receiving an overall score of 131 and a rating of ‘moderate’. 

The Mercedes GLE’s system had the highest overall score of 174, the top rating of ‘very good’ and a score of 85 for driver engagement. Most other vehicles had scores of 70 or above for driver engagement. 

The European New Car Assessment Program (NCAP) worked with UK insurance group Thatcham Research to perform the assessment, which they called the first consumer ratings specifically focused on driver assistance systems, technology that automates some tasks, including acceleration, braking and steering support.

Safety and insurance researchers have frequently warned of the risks of consumers overestimating the systems’ abilities, a misconception increased by some automakers calling their products Autopilot (Tesla), ProPilot (Nissan) or CoPilot (Ford). (Others are Super Cruise (Cadillac), Drive Pilot (Mercedes Benz), Traffic Jam Pilot (Audi), Active Driving Assistant Professional (BMW), Highway Driving Assist (Kia) and EyeSight (Subaru).)

The US National Transportation Safety Board (NTSB)  has criticized Tesla’s Autopilot for enabling drivers to turn their attention from the road. US regulators have investigated 15 crashes since 2016 involving Tesla vehicles equipped with Autopilot.  

Matthew Avery, a Euro NCAP board member and research director at Thatcham Research

“Unfortunately, there are motorists that believe they can purchase a self-driving car today. This is a dangerous misconception that sees too much control handed to vehicles that are not ready to cope with all situations,” stated Matthew Avery, a Euro NCAP board member and research director at Thatcham Research. 

Europeans Ahead on Testing of Driver Assistance Systems 

The US lags behind Europe in the testing of driver assistance systems, according to a recent account in Claims Journal, serving the insurance industry. The acting head of the US National Highway Traffic Safety Administration (NHTSA) announced recently that the agency would be making changes this year to a testing program that assigns safety grades to vehicles.   

“We’re raising the bar for safety technologies in our new vehicles,” stated acting NHTSA chief James Owens. The agency in December 2015 issued proposed rules for testing procedures that would be similar to more comprehensive testing done by European regulators. But no rules have been put forward since then. The NTSB has criticized NHTSA for its hands-off approach to overseeing driver assistance programs. The NTSB has compared NHTSA’s testing and rating proposals unfavorably to consumer safety systems put in place by European agencies.  

James Owens, acting Chief, National Highway Transportation Safety Administration

Euro NCAP began rating automatic braking systems in 2014. It has been testing the performance of advanced cruise control, lane-centering systems and blind spot detection since 2018. Beginning this May, it began to grade how well a car’s system keeps the driver engaged.  

The group is a non-governmental body but funded by some EU countries and also receives money from national motor clubs and insurers. The group shares testing methods with NHTSA and the NTSB on a regular basis.  

In 2018, EU regulators required the installation of acoustic and visual warning signals for lane-keeping systems every 15 seconds if drivers take their hands off the wheel. As a result, Tesla had to issue a software update to its Autopilot system in the EU. A regulatory body is currently working on rules for more advanced hands-off systems that can control braking, acceleration, and lane changes at speeds of up to 60 km/h (37 mph). 

Under draft EU rules, carmakers among other things need to show how the system safely hands control back to the driver, how the car monitors the road, and how it reacts in emergency situations.  

The US currently has no rules for automated driver assistance systems. Automakers are allowed to self-certify that their vehicles comply with existing rules, according to University of South Carolina law professor Bryant Walker Smith, who focuses on automated driving. 

AAA Testing Finds Automated Driver Assistance Systems to be Unreliable 

A study by the American Automobile Association in the US found driver assistance systems to be unreliable, according to a recent account in Car and Driver.

AAA tested five 2019 and 2020 vehicles equipped with the most advanced technology each automaker had to offer. These included a 2019 BMW X7 with “Active Driving Assistant Professional,” a 2019 Cadillac CT6 with “Super Cruise,” a 2019 Ford Edge with “Ford Co-Pilot360,” a 2020 Kia Telluride with “Highway Driving Assist” and a 2020 Subaru Outback with “EyeSight.” All of these systems are regarded as Level 2 autonomous systems, meaning the driver is expected to remain aware while the system is in use. 

The AAA testing showed that all five vehicles experienced on average one issue—such as the need for the driver to act quickly to keep the vehicle centered in a lane—every eight miles. 

The safety benefits of such systems, the study concluded, are not reliable. The systems become dangerous when drivers over-rely on the technology and do not notice when the systems disengage—which they often do with little notice, AAA noted. Of all the errors that the systems made on open-road testing, 73% involved instances of lane departure or erratic lane position.  

“Manufacturers need to work toward more dependable technology, including improving lane keeping assistance and providing more adequate alerts,” stated Greg Brannon, director of automotive engineering and industry relations at AAA, in a statement. “Active driving assistance systems are designed to assist the driver and help make the roads safer, but the fact is, these systems are in the early stages of their development.”  

In the AAA study, the Cadillac CT6 experienced the fewest number of issues over the roughly 800 miles the vehicles each traveled, followed by the BMW X7, Subaru Outback, Kia Telluride, and Ford Edge. On the closed-course portion of the test, the vehicles had difficulty when approaching a simulated disabled vehicle, with a collision occurring two-thirds of the time.

“We know human error contributes to 94% of all crashes, which is why we are focused on advancing driver assist technologies that can help significantly enhance safety,” stated Wade Newton, the VP of communications at the Alliance for Automotive Innovation, to Car and Driver. “However, as we integrate these increasingly advanced driver assistance features into more vehicles, it is critical that drivers fully understand the system’s capabilities and limitations as well as their responsibilities.” 

Read the source articles from Reuters, in Claims Journal and Car and Driver.



from AI Trends https://ift.tt/3oizhiv
via A.I .Kung Fu

How a Veteran Would Study Machine Learning If He Had to Start Today

By AI Trends Staff 

How one gets educated for AI continues to be an area worth exploring with many options available. Charting one’s career as a member of a newly-formed team working to leverage AI to help the business is best met with creativity and patience.  

It’s as much a mission to find out how organizations are setting up for AI development as it is about finding out what you really want to do. The experience of one now-veteran machine modeler could be timely guidance for many in this context.  

Daniel Bourke, machine learning engineer and instructor

Daniel Bourke is an entrepreneur running a YouTube site and writing about technology. He worked as a machine learning engineer at a company in Brisbane, Australia, for several years. He helped to qualify himself with a nanodegree in Artificial Intelligence and Deep Learning Foundations from Udacity, and a Deep Learning course from Coursera, according to his LinkedIn page. He also taught code to young people, created an AI chatbot named MoveMore to encourage activity, and worked as an Uber driver.   

Today he teaches a machine learning course aimed at beginners to over 30,000 students. Writing about the experience of his last three years in a recent account in TheNextWeb, he offered some advice for anyone starting out today seeking a career in AI and machine learning. “Due to several failures, I took five years to do a three-year degree,” he stated. “So as it stands, I feel like I’ve done a machine learning undergraduate degree.”  

People might get the impression Bourke is an expert now. “I know a lot more than I started but I also know how much I don’t know,” he stated.

His advice on online courses: “They’re all remixes of the same thing. Instead of worrying about which course is better than another, find a teacher who excites you. Learning anything is 10% material and 90% being excited to learn.”  

He suggests learning software engineering before machine learning, “Because machine learning is an infrastructure problem (infrastructure means all the things which go around your model so others can use it, the hot new term you’ll want to look up is MLOps). And deployment, as in getting your models into the hands of others, is hard. But that’s exactly why I should’ve spent more time there.” [Ed. Note: MLOps refers to machine learning operations, a practice for collaboration between data scientists and operations professionals to help manage production ML.]  

“If I was starting again today, I’d find a way to deploy every semi-decent model I build (with exceptions for the dozens of experiments leading to the one worth sharing).”  

Here is how to do it: “Train a model, build a front-end application around it with Streamlit, get the application working locally (on your computer), once it’s working wrap the application with Docker, then deploy the Docker container to Heroku or another cloud provider.” 
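
A minimal sketch of the first steps of that flow might look like the following; the file names, the pickled model, and the way input is passed to it are assumptions for illustration. Save it as app.py, run it locally with `streamlit run app.py`, and only then move on to wrapping it with Docker and deploying the container.

```python
# app.py - tiny Streamlit front end around an already-trained model (assumed model.pkl).
import pickle

import streamlit as st

@st.cache_resource            # load the model once, not on every interaction
def load_model():
    with open("model.pkl", "rb") as f:   # assumed path to your trained model
        return pickle.load(f)

st.title("Demo: a deployed ML model")
text = st.text_input("Enter some input for the model")

if text:
    model = load_model()
    prediction = model.predict([text])   # assumes the model accepts raw text input
    st.write(f"Prediction: {prediction[0]}")
```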

Deploying models enables you to learn things you may not otherwise consider. It allows you to answer these questions:   

  • “How long does inference take (the time for your model to make a prediction)? 
  • How do people interact with it (maybe the data they send to your image classifier is different to your test set, data in the real world changes often)? 
  • Would someone actually use this?”
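
For the first of those questions, a rough way to get an answer (illustrative only, and assuming the same kind of trained model as in the sketch above) is simply to time repeated predictions and take the median:

```python
import statistics
import time

def time_inference(model, sample, n_runs: int = 100) -> float:
    """Return the median seconds per prediction over n_runs calls."""
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        model.predict([sample])   # assumes the model takes a list of inputs
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)
```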

Courses help to build foundation skills; experience helps you to remember them, he suggests, noting that he ordered the book Mathematics for Machine Learning and planned to read it cover to cover. Learn more at Daniel Bourke’s website. 

Microsoft, Udacity Collaborate on ML for Azure Training 

In other machine learning education news, Microsoft and Udacity recently announced they have joined forces to launch a machine learning (ML) engineer training program focused on training, validating, and deploying models using the Azure Suite. The program is open to students with minimal coding experience and will focus on using Azure automated ML, according to an account in InfoQ.  

The Nanodegree program gives students the opportunity to enhance their technical skills in ML; students build models, manage ML pipelines, tweak the models to improve performance, and operationalize the models using MLOps best practices.   

The course runs remotely. Support is provided by technical mentors to help students clear roadblocks. Career coaches engage in one-on-one calls to help students improve their resumes, LinkedIn profiles and GitHub repositories.  

Gabriel Dalporto, the CEO of Udacity

Gabriel Dalporto, the CEO of Udacity, stated at the launch event, “New-age technologies such as AI and ML will govern the future of businesses. Organizations have fast-forwarded their steps for hiring the best talent that can bring them a competitive edge in the market. We have developed this program in collaboration with Microsoft to offer a deep dive into the world of ML to learners. We believe that our approach will empower our students to have long and successful careers.”  

Engineer Suggests Focusing on a Language, Selecting an Environment 

Another set of suggestions for how to start learning AI comes from software engineer Omar Rabbolini, writing in gitconnected, who recommends beginners focus on two of the most popular frameworks for AI and ML, Torch and TensorFlow. From Facebook and Google respectively, the two frameworks are used all over the industry to build, train, and run deep learning networks to enable image recognition, speech synthesis, and other technologies. Rabbolini has 20 years of experience and has concentrated on mentoring, writing, and content creation. (Learn more about Omar Rabbolini.)

For a language, he recommends learning Python, which he refers to as the “de facto standard for AI development.” Its advantages include an ample supply of online learning material, an easy-to-learn syntax, and many available libraries for data manipulation and data display. 

He recommends Jupyter notebooks as the main technology to run Python environments in a browser. Jupyter is an open-source web application that allows the creation of documents that contain live code, equations, visualizations and narrative text. Two alternatives are Google’s own Colab system or Microsoft’s Azure Notebooks.  

Select an environment manager, a package that allows you to create multiple separate Python environments, so that you can set up PyTorch (the Python version of Torch) and TensorFlow side by side. He used Miniconda for this purpose.
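
With the separate environments created, a quick sanity check like this (just an illustrative snippet, not part of any course) confirms which framework is importable in the environment you currently have activated:

```python
# Run inside an activated environment to see which frameworks it provides.
import importlib

for pkg in ("torch", "tensorflow"):
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg} {mod.__version__} is available in this environment")
    except ImportError:
        print(f"{pkg} is not installed in this environment")
```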

Once the environment is working correctly, the learning developer needs to select a development environment that understands Python in this case. He suggests Visual Studio Code (VSCode) from Microsoft, which is free.  

Read the source articles and information in TheNextWeb, at Daniel Bourke’s website, in InfoQ, in gitconnected and about Omar Rabbolini.



from AI Trends https://ift.tt/34m4auB
via A.I .Kung Fu

Forecasting for Fall Uncertainties 

By Scott Lundstrom, Analyst, Supply Chain Futures 

Over the last several months, the supply chain planning community has been faced with the question of how to deal with increased uncertainty as we enter the fall. While we are adjusting to COVID-19, we are not overcoming it. Pandemic forces will continue to impact our business as we enter the fall and move into winter. Widespread vaccine availability is still 9 to 12 months away for most people. Environmental and climate disruption challenges continue unabated. Political instability and challenges still dominate the front page.  

Scott Lundstrom, Analyst, Supply Chain Futures

Our relatively stable world of global supply chains has been upended in ways we could never imagine. What is a supply chain executive to do? While it might sound obvious at this point, COVID has impressed upon us all the need for digital transformation to drive resiliency and agility into our operations. First and foremost, we need to adopt an outside-in view of the supply chain. Viewing the supply chain as a demand-driven business network is essential to avoid execution failures, excess inventories, and the inevitable bullwhip effects of the chaotic business environment. AI and advanced supply chain and data analytics can help, but only if we have the data and processes required to make use of intelligence in creating agility and resiliency. 

Changes in philosophy and strategy – from efficiency to resiliency. This really has little to do with technology. Change management among senior leaders can be incredibly challenging but is an absolute necessity. Adopting a focus on outside-in thinking and customer experience can be difficult after many years of internal process optimization to reduce costs and minimize inventory. Analytics can play a role in gaining a better understanding of where we are experiencing difficulties and disappointing customers.

Changes in sourcing agreements to improve supply stability and demand forecasting – Supply chain is a team sport. It is only by working with our partner suppliers that we can improve resiliency. Moves toward more flexible agreements that allow a range of order actions across multiple categories based on demand and availability will help make supply chains less brittle and restrictive. Partner data about tier 2 and tier 3 suppliers can help us improve our planning models to incorporate uncertainty in geopolitical, climate, logistic, and pandemic dimensions. Utilizing better, more detailed data about suppliers may be one of the most important changes we can make in improving the resilience of our planning optimization models. This is also essential data if we hope to utilize machine learning and auto ML in our planning models.  

Changes in logistics planning embracing flexibility and local supply – One of the biggest changes we will see in supply chains this fall is a desire to move toward more local sources of supply. Geographical complexities driven by lockdowns, limited global shipping capacity, and geopolitical instability are causing the pendulum to swing back toward more local sources of supply. 

Changes in supply and demand data requirements and digital twins – Real improvements in supply chain performance require more real time data. Real time data from customers, suppliers, distributors and logistic suppliers needs to be integrated to provide a real time view of the end-to-end process of meeting customer needs. Increasingly, supply chain software providers are turning to digital twin and digital thread data models to help provide this visibility. Advanced analytics and machine learning algorithms are ideally suited to identify and resolve issues when provided with this type of operating framework. Preparing for uncertainty and creating resilience should be a focus of every supply chain organization as we move into the next wave of pandemic uncertainty. Prepared organizations will experience much higher levels of customer satisfaction, and will experience better business outcomes and performance. 

Scott Lundstrom is an analyst focused on the intersection of AI, IoT and Supply Chains. See his blog at Supply Chain Futures. 



from AI Trends https://ift.tt/2Tjs03O
via A.I .Kung Fu

California appeals court: Uber and Lyft likely misclassified drivers as contractors - CNET

With a major lawsuit and a nearly $200 million ballot measure campaign, California has become ground zero for gig worker status.

from CNET News https://ift.tt/34mI7UI
via A.I .Kung Fu

2020 Antarctic ozone hole 'one of the largest and deepest in recent years' - CNET

The hole in the ozone layer keeps coming back to haunt us.

from CNET News https://ift.tt/2IS3h4t
via A.I .Kung Fu

Ghostbusters sneakers! Reebok to drop new Ghost Smashers on Halloween - CNET

Shoe ya gonna call?

from CNET News https://ift.tt/34jl3WL
via A.I .Kung Fu

Google announces Fi phone subscription program where users can buy Pixel 4a for $9 per month for two years costing a total of $216 instead of $349 upfront price (Jay Peters/The Verge)

Jay Peters / The Verge:
Google announces Fi phone subscription program where users can buy Pixel 4a for $9 per month for two years costing a total of $216 instead of $349 upfront price  —  That means you'll pay a total of $216 for the phone over a 24-month subscription  —  You can now buy the Pixel 4A from Google …



from Techmeme https://ift.tt/37zExsm
via A.I .Kung Fu

The best espresso machine for 2020: Cuisinart, Breville, Mr. Coffee and more - CNET

We tested a slew of popular espresso machines from Nespresso, Breville, Mr. Coffee, Cuisinart, DeLonghi and others. Here's what we learned.

from CNET News https://ift.tt/3hIFlfy
via A.I .Kung Fu

31 of the best movies to stream on Netflix - CNET

Don't know what to watch tonight? Here are some of the best movies Netflix has to offer.

from CNET News https://ift.tt/2ITV285
via A.I .Kung Fu

Appeals Court Says Uber and Lyft Must Treat California Drivers as Employees

The ruling adds new urgency to a ballot measure in the state that would exempt the companies from a new labor law intended to give gig workers more employment rights.

from NYT > Technology https://ift.tt/37wNeU4
via A.I .Kung Fu

As voters weigh Prop 22, a California appeals court upheld a lower court ruling ordering Uber and Lyft to stop classifying drivers as independent contractors (Cyrus Farivar/NBC News)

Cyrus Farivar / NBC News:
As voters weigh Prop 22, a California appeals court upheld a lower court ruling ordering Uber and Lyft to stop classifying drivers as independent contractors  —  OAKLAND, Calif.—A California state appellate court on Thursday upheld a lower court's ruling that there was an “overwhelming likelihood” …



from Techmeme https://ift.tt/2IYk7yT
via A.I .Kung Fu

Microsoft, IBM, Nvidia, and others released an open framework to help security analysts detect, counter, and remediate threats against machine learning systems (Kyle Wiggers/VentureBeat)

Kyle Wiggers / VentureBeat:
Microsoft, IBM, Nvidia, and others released an open framework to help security analysts detect, counter, and remediate threats against machine learning systems  —  Microsoft, the nonprofit MITRE Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch today released …



from Techmeme https://ift.tt/3mqBQxt
via A.I .Kung Fu

31 best TV shows to binge-watch on Hulu - CNET

Looking for a great show to watch tonight? Here are some of the best Hulu has to offer.

from CNET News https://ift.tt/34kiuDF
via A.I .Kung Fu

F.T.C. Decision on Pursuing Facebook Antitrust Case Is Said to Be Near

Any action would follow the Justice Department’s landmark suit this week against Google, as a bipartisan tech backlash ramps up.

from NYT > Technology https://ift.tt/3jqJGVI
via A.I .Kung Fu

TikTok will now specify which content policy a removed video violated, after testing the notification feature and seeing a 14% reduction in user appeals (Sean Hollister/The Verge)

Sean Hollister / The Verge:
TikTok will now specify which content policy a removed video violated, after testing the notification feature and seeing a 14% reduction in user appeals  —  You know what you did.  Or do you?  Until recently, TikTok wouldn't necessarily explain why it removed one of your videos from the platform.



from Techmeme https://ift.tt/3oeUFFo
via A.I .Kung Fu

Wednesday, October 21, 2020

Robust Intelligence, which helps developers deploy AI models in a secure manner, comes out of stealth with $14M in seed and Series A funding led by Sequoia (Kenrick Cai/Forbes)

Kenrick Cai / Forbes:
Robust Intelligence, which helps developers deploy AI models in a secure manner, comes out of stealth with $14M in seed and Series A funding led by Sequoia  —  Yaron Singer climbed the tenure track ladder to a full professorship at Harvard in seven years, fueled by his work on adversarial machine learning …



from Techmeme https://ift.tt/3m8pYQn
via A.I .Kung Fu

Best facial recognition security cameras of 2020 - CNET

Want a security camera that IDs faces? Here are your top choices.

from CNET News https://ift.tt/31svRjC
via A.I .Kung Fu

Snap executive says Snapchat's DAU in India grew 150% YoY in Q3, as Snap announces a slate of new India-specific original series and mini games (Vikas SN/The Economic Times)

Vikas SN / The Economic Times:
Snap executive says Snapchat's DAU in India grew 150% YoY in Q3, as Snap announces a slate of new India-specific original series and mini games  —  Snapchat has witnessed nearly 150% year-on-year growth in its daily active users from India in the third quarter of 2020



from Techmeme https://ift.tt/3o9kreh
via A.I .Kung Fu

Federal district court approves the settlement between Kik and the SEC where Kik will pay a $5M fine to settle dispute over a 2017 token sale that raised ~$100M (Isabelle Kirkwood/BetaKit)

Isabelle Kirkwood / BetaKit:
Federal district court approves the settlement between Kik and the SEC where Kik will pay a $5M fine to settle dispute over a 2017 token sale that raised ~$100M  —  A court has approved the settlement between Kitchener-Waterloo messenger app Kik and the US Securities Exchange Commission (SEC), which was proposed earlier this week.



from Techmeme https://ift.tt/31xlevJ
via A.I .Kung Fu

McAfee raises $740M in its IPO at $20 per share in its return to the public market, valuing the company at $8.6B (Crystal Tse/Bloomberg)

Crystal Tse / Bloomberg:
McAfee raises $740M in its IPO at $20 per share in its return to the public market, valuing the company at $8.6B  —  McAfee Corp. and its shareholders priced an initial public offering within a targeted range in the company's return to the stock market, according to people with knowledge of the matter.



from Techmeme https://ift.tt/37txCAJ
via A.I .Kung Fu

Facebook Dating launches in Europe, after Ireland's regulators forced a delay of its planned debut in February, claims it made 1.5B matches across 20 countries (Sam Shead/CNBC)

Sam Shead / CNBC:
Facebook Dating launches in Europe, after Ireland's regulators forced a delay of its planned debut in February, claims it made 1.5B matches across 20 countries  —  - Facebook has expanded its “Facebook Dating” service to Europe.  — Facebook claims that the platform has generated 1.5 …



from Techmeme https://ift.tt/34lpX5H
via A.I .Kung Fu

FBI: Iran, Russia obtained voter data to interfere with US elections - CNET

Both countries have obtained voter registration data, which Iran used to send emails to intimidate voters.

from CNET News https://ift.tt/37vEgGv
via A.I .Kung Fu