Friday, November 13, 2020

Predicting qualification ranking based on practice session performance for Formula 1 Grand Prix

If you’re a Formula 1 (F1) fan, have you ever wondered why F1 teams perform so differently between practice and qualifying sessions? Why do they hold multiple practice sessions in the first place? Can practice session results actually tell us something about the upcoming qualifying session? In this post, we answer these questions and more. We show how we can predict qualifying results from practice session performance by harnessing the power of data and machine learning (ML). These predictions are being integrated into the new “Qualifying Pace” insight for each F1 Grand Prix (GP). This work is part of the ongoing collaboration between F1 and the Amazon ML Solutions Lab to generate new F1 Insights powered by AWS.

Each F1 GP consists of several stages. The event starts with three practice sessions (P1, P2, and P3), followed by a qualifying (Q) session, and then the final race. Teams approach practice and qualifying sessions differently because these sessions serve different purposes. The practice sessions are the teams’ opportunities to test out strategies and tire compounds to gather critical data in preparation for the final race. They observe the car’s performance with different strategies and tire compounds, and use this to determine their overall race strategy.

In contrast, qualifying sessions determine the starting position of each driver on race day. Teams focus solely on obtaining the fastest lap time. Because of this shift in tactics, Friday and Saturday practice session results often fail to accurately predict the qualifying order.

In this post, we introduce deterministic and probabilistic methods to model the time difference between the fastest lap time in the practice sessions and the qualifying session, ∆t = t_q − t_p. The goal is to more accurately predict the upcoming qualifying standings based on the practice sessions.

Error sources of ∆t

The delta of the fastest lap time between practice and qualifying sessions (∆t) comes primarily from variations in fuel level and tire grip.

A higher fuel level adds weight to the car and reduces the speed of the car. For practice sessions, teams vary the fuel level as they please. For the second practice session (P2), it’s common to begin with a low fuel level and run with more fuel in the latter part of the session. During qualifying, teams use minimal fuel levels in order to record the fastest lap time. The impact of fuel on lap time varies from circuit to circuit, depending on how many straights the circuit has and how long these straights are.
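As a back-of-the-envelope illustration (the numbers below are made up; the real per-circuit sensitivities come from F1’s simulations), the lap time cost of extra fuel is roughly the fuel mass times the circuit’s fuel sensitivity:

```python
def fuel_penalty(extra_fuel_kg, sensitivity_s_per_kg):
    """Approximate lap time lost (seconds) from carrying extra fuel."""
    return extra_fuel_kg * sensitivity_s_per_kg

# Hypothetical example: 30 kg of extra fuel on a circuit where each kg
# costs about 0.033 s per lap adds roughly a second to the lap time.
penalty = fuel_penalty(30.0, 0.033)  # ≈ 0.99 s
```

A circuit with long straights would have a larger sensitivity value, so the same fuel load costs more lap time there.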

Tires also play a significant role in an F1 car’s performance. During each GP event, the tire supplier brings various tire types with varying compounds suitable for different racing conditions. Two of these are for wet circuit conditions: intermediate tires for light standing water and wet tires for heavy standing water. The remaining dry running tires can be categorized into three compound types: hard, medium, and soft. These tire compounds provide different grips to the circuit surface. The more grip the tire provides, the faster the car can run.

Past racing results showed that car performance dropped significantly when wet tires were used. For example, in the 2018 Italy GP, because the P1 session was wet and the qualifying session was dry, the fastest lap time in P1 was more than 10 seconds slower than the qualifying session.

Among the dry running types, the hard tire provides the least grip but is the most durable, whereas the soft tire has the most grip but is the least durable. Tires degrade over the course of a race, which reduces tire grip and slows down the car. Track temperature and moisture affect the progression of degradation, which in turn changes the tire grip. As with fuel level, the tire impact on lap time changes from circuit to circuit.

Data and attempted approaches

Given this understanding of factors that can impact lap time, we can use fuel level and tire grip data to estimate the final qualifying lap time based on known practice session performance. However, as of this writing, data records to directly infer fuel level and tire grip during the race are not available. Therefore, we take an alternative approach with data we can currently obtain.

The data we used in the modeling were records of the fastest lap times for each GP since 1950 and weather data for the corresponding sessions for a subset of those years. The lap time data included the fastest lap time for each session (P1, P2, P3, and Q) of each GP, along with the driver, car and team, and circuit name (publicly available on F1’s website). Track wetness and temperature for each session were available in the weather data.

We explored two implicit methods with the following model inputs: the team and driver name, and the circuit name. Method one was a rule-based empirical model that attributed the observed ∆t to circuits and teams. We estimated the latent parameter values (fuel level and tire grip differences specific to each team and circuit) based on their known lap time sensitivities. These sensitivities were provided by F1 and calculated through simulation runs on each circuit track. Method two was a regression model with driver and circuit indicators. The regression model learned the sensitivity of ∆t for each driver on each circuit without explicitly knowing the fuel level and tire grip. We developed and compared deterministic models using XGBoost and AutoGluon, and probabilistic models using PyMC3.

We built models using race data from 2014 to 2019 and tested them against race data from 2020. We excluded data from before 2014 because of significant car development and regulation changes over the years. We also removed races in which either the practice or qualifying session was wet, because the ∆t values for those sessions were considered outliers.
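The filtering described above can be sketched as follows. The field names (`session`, `wetness`, `lap_time`) and the wetness threshold are hypothetical, since the underlying F1 data schema isn’t public:

```python
def build_delta_t(records, wet_threshold=0.5):
    """Compute delta_t = t_q - t_p per (year, gp, driver, practice session),
    dropping pairs where either session was wet."""
    # Index qualifying laps by (year, gp, driver)
    quali = {(r["year"], r["gp"], r["driver"]): r
             for r in records if r["session"] == "Q"}
    rows = []
    for r in records:
        if r["session"] not in ("P1", "P2", "P3"):
            continue
        q = quali.get((r["year"], r["gp"], r["driver"]))
        if q is None:
            continue
        # Exclude wet practice or qualifying sessions (delta_t outliers)
        if r["wetness"] > wet_threshold or q["wetness"] > wet_threshold:
            continue
        rows.append({"year": r["year"], "gp": r["gp"], "driver": r["driver"],
                     "session": r["session"],
                     "delta_t": q["lap_time"] - r["lap_time"]})
    # Keep only the 2014-2019 seasons for training
    return [row for row in rows if 2014 <= row["year"] <= 2019]
```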

Managed model training with Amazon SageMaker

We trained our regression models on Amazon SageMaker.

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy ML models quickly. For model training in particular, it offers managed training jobs, built-in algorithms, automatic model tuning, and custom metric tracking.

For our use case, we iterated over multiple choices of model feature sets and hyperparameters. Recording and comparing the model metrics of interest was critical to choosing the most suitable model. The Amazon SageMaker API let us define custom metrics before launching a training job and retrieve them easily after the job completed. Using the automatic model tuning feature reduced the mean squared error (MSE) on the test data by 45% compared to the default hyperparameter choice.

We trained an XGBoost model using Amazon SageMaker’s built-in implementation, which let us run model training through a general estimator interface. Compared to the original open-source implementation, this approach provided better logging, stronger hyperparameter validation, and a larger set of reported metrics.

Rule-based model

In the rule-based approach, we reason that the lap time differences ∆t come primarily from systematic variations in tire grip for each circuit and fuel level for each team between practice and qualifying sessions. After accounting for these known variations, we assume the residuals are small random numbers with a mean of zero. ∆t can be modeled with the following equation:

∆t = ∆t_f(c) · f(t,c) + ∆t_g(c) · g(c) + ε

∆t_f(c) and ∆t_g(c) are the known sensitivities of lap time to fuel mass and tire grip on circuit c, and ε is the residual. A hierarchy exists among the factors in the equation. We assume grip variations for each circuit (g(c)) are at the top level. Under each circuit, there are variations of fuel level across teams (f(t,c)).

To further simplify the model, we neglect ε because we assume it is small. We further assume the fuel variation for each team is the same across all circuits (that is, f(t,c) = f(t)). The model then simplifies to the following:

∆t = ∆t_f(c) · f(t) + ∆t_g(c) · g(c)

Because ∆t_f(c) and ∆t_g(c) are known, we can estimate the team fuel variations f(t) and the circuit grip variations g(c) from the data.
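Note that, as written, the model is identifiable only up to a constant shift between f and g (adding a constant to every f(t) can be offset by adjusting each g(c)). The sketch below resolves this with an extra assumption not stated in the post — that team fuel variations average to zero — which yields a simple closed-form estimate:

```python
def estimate_variations(obs, s_fuel, s_grip):
    """Closed-form estimate of f(t) and g(c) under the identification
    constraint sum_t f(t) = 0.
    obs: {(team, circuit): delta_t}
    s_fuel, s_grip: per-circuit sensitivities (assumed known)."""
    teams = sorted({t for t, c in obs})
    circuits = sorted({c for t, c in obs})
    # Step 1: averaging delta_t over teams removes the fuel term
    # (the mean of f is zero by assumption), leaving only the grip term.
    g = {}
    for c in circuits:
        mean_d = sum(obs[(t, c)] for t in teams) / len(teams)
        g[c] = mean_d / s_grip[c]
    # Step 2: back out each team's fuel variation from the residuals.
    f = {}
    for t in teams:
        f[t] = sum((obs[(t, c)] - s_grip[c] * g[c]) / s_fuel[c]
                   for c in circuits) / len(circuits)
    return f, g
```

This is an illustrative estimator, not necessarily the one used in production; any equivalent least-squares fit with the same constraint would do.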

The differences in the sensitivities depend on the characteristics of circuits. From the following track maps, we can observe that the Italian GP circuit has fewer corner turns and the straight sections are longer compared to the Singapore GP circuit. Additional tire grip gives a larger advantage in the Singapore GP circuit.

[Track maps of the Italian GP and Singapore GP circuits]

ML regression model

For the ML regression method, we don’t directly model the relation between ∆t and the fuel level and grip variations. Instead, we fit the following regression model using only the circuit, team, and driver indicator variables:

∆t = Σ_c β_c I_c + Σ_t β_t I_t + Σ_d β_d I_d + ε

Ic, It, and Id represent the indicator variables for circuits, teams, and drivers.
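Building the indicator (one-hot) design matrix for this regression is straightforward; here is a minimal sketch with hypothetical inputs:

```python
def one_hot_design(rows, circuits, teams, drivers):
    """Build the indicator design matrix [I_c | I_t | I_d] for the regression.
    rows: list of (circuit, team, driver) tuples for each observation."""
    X = []
    for circuit, team, driver in rows:
        x = [1.0 if circuit == c else 0.0 for c in circuits]   # I_c block
        x += [1.0 if team == t else 0.0 for t in teams]        # I_t block
        x += [1.0 if driver == d else 0.0 for d in drivers]    # I_d block
        X.append(x)
    return X
```

The resulting matrix, together with the observed ∆t values, can be fed to any regression learner (such as XGBoost or AutoGluon, as above).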

Hierarchical Bayesian model

Another challenge with modeling the race pace was due to noisy measurements in lap times. The magnitude of random effect (ϵ) of ∆t could be non-negligible. Such randomness might come from drivers’ accidental drift from their normal practice at the turns or random variations of drivers’ efforts during practice sessions. With deterministic approaches, such random effect wasn’t appropriately captured. Ideally, we wanted a model that could quantify uncertainty about the predictions. Therefore, we explored Bayesian sampling methods.

With a hierarchical Bayesian model, we account for the hierarchical structure of the error sources. As with the rule-based model, we assume grip variations for each circuit (g(c)) are at the top level. The additional benefit of a hierarchical Bayesian model is that it incorporates individual-level variations when estimating group-level coefficients. It’s a middle ground between two extreme views of the data. One extreme is to pool data across every group (circuit and driver) without considering the intrinsic variations among groups. The other extreme is to train a separate regression model for each circuit or driver; with 21 circuits, that amounts to 21 regression models. With a hierarchical model, we have a single model that considers variations at the group and individual levels simultaneously.

We can mathematically describe the underlying statistical model for the hierarchical Bayesian approach as the following varying intercepts model:

∆t_i ~ Normal(μ_jk + β_p · w_p + β_q · w_q + β_T · ∆T, σ²)
μ_jk ~ Normal(θ_k, σ_μ²)
θ_k ~ Normal(μ_0, σ_θ²)

Here, i represents the index of each data observation, j represents the index of each driver, and k represents the index of each circuit. μjk represents the varying intercept for each driver under each circuit, and θk represents the varying intercept for each circuit. wp and wq represent the wetness level of the track during practice and qualifying sessions, and ∆T represents the track temperature difference.

Test models in the 2020 races

After predicting ∆t, we added it to the practice lap times to generate predicted qualifying lap times. We determined the final ranking based on the predicted qualifying lap times, and then compared the predicted lap times and rankings with the actual results.
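The ranking step can be sketched in a few lines; the driver names and times below are hypothetical:

```python
def predicted_qualifying_ranking(practice_times, predicted_delta):
    """Add each driver's predicted delta_t to their practice lap time and
    rank drivers by the resulting predicted qualifying lap time."""
    predicted_q = {d: practice_times[d] + predicted_delta[d]
                   for d in practice_times}
    return sorted(predicted_q, key=predicted_q.get)
```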

The following figure compares the predicted rankings and the actual rankings for all three practice sessions for the Austria, Hungary, and Great Britain GPs in 2020 (we exclude P2 for the Hungary GP because the session was wet).

For the Bayesian model, we generated predictions with an uncertainty range based on the posterior samples. This let us rank drivers by the posterior median while accounting for unexpected outcomes in their performances.

The following figure shows an example of predicted qualifying lap times (in seconds) with an uncertainty range for selected drivers at the Austria GP. If two drivers’ prediction profiles are very close (such as MAG and GIO), it’s not surprising that either driver might be the faster one in the upcoming qualifying session.

Metrics on model performance

To compare the models, we used mean squared error (MSE) and mean absolute error (MAE) for lap time errors. For ranking errors, we used rank discounted cumulative gain (RDCG). Because only the top 10 drivers gain points during a race, RDCG applies more weight to errors in the higher rankings. For the Bayesian model output, we used the posterior median to generate the metrics.
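RDCG is not a standard off-the-shelf metric, and the post doesn’t give its exact formula. The following is one plausible formulation consistent with the description: each predicted position earns a gain based on how close it is to the driver’s actual position, with a logarithmic discount so that errors near the top of the order count more. Read it as an illustrative sketch, not F1’s actual metric:

```python
import math

def rdcg(predicted_order, actual_order):
    """One plausible rank discounted cumulative gain.
    Gain per position decreases with the absolute rank error; a logarithmic
    discount weights the top of the predicted order most heavily.
    Returns a value in (0, 1], where 1 means a perfect ranking."""
    actual_pos = {d: i for i, d in enumerate(actual_order)}
    n = len(predicted_order)
    dcg = sum((n - abs(i - actual_pos[d])) / math.log2(i + 2)
              for i, d in enumerate(predicted_order))
    # Normalize by the score of a perfect prediction
    ideal = sum(n / math.log2(i + 2) for i in range(n))
    return dcg / ideal
```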

The following table shows the resulting metrics of each modeling approach for the test P2 and P3 sessions.

MODEL                   MSE             MAE             RDCG
                        P2      P3      P2      P3      P2     P3
Practice raw            2.822   1.053   1.544   0.949   0.92   0.95
Rule-based              0.349   0.186   0.462   0.346   0.88   0.95
XGBoost                 0.358   0.141   0.472   0.297   0.91   0.95
AutoGluon               0.567   0.351   0.591   0.459   0.90   0.96
Hierarchical Bayesian   0.431   0.186   0.521   0.332   0.87   0.92

All models reduced the qualifying lap time prediction error significantly compared to using the practice session results directly. Without pace correction, the MSE on the predicted qualifying lap time was as high as 2.8. With ML methods that automatically learned pace variation patterns for teams and drivers on different circuits, we brought the MSE below 0.5. The resulting predictions more accurately represented the pace in the qualifying session. The models also improved the ranking predictions by a small margin. However, no single approach outperformed all the others, which highlights the effect of random errors in the underlying data.

Summary

In this post, we described a new Insight developed by the Amazon ML Solutions Lab in collaboration with Formula 1 (F1).

This work is part of the six new F1 Insights powered by AWS being released in 2020, as F1 continues to use AWS for advanced data processing and ML modeling. Fans can expect to see this new Insight unveiled at the 2020 Turkish GP, providing predictions of qualifying pace during the practice sessions.

If you’d like help accelerating the use of ML in your products and services, please contact the Amazon ML Solutions Lab.

 


About the Author

Guang Yang is a data scientist at the Amazon ML Solutions Lab, where he works with customers across various verticals and applies creative problem solving to generate value for them with state-of-the-art ML/AI solutions.



from AWS Machine Learning Blog https://ift.tt/3nqbL22

Black Friday Walmart deals: $199 robot vacuum, $194 AirPods Pro available now, $35 Keurig and $119 GoPro coming soon - CNET

The next phase of the retailer's early sales is happening now -- but the best stuff is going fast.

from CNET News https://ift.tt/3plTSD4
via A.I .Kung Fu

Best Black Friday 2020 TV deals: $100 32-incher, $250 55-inch TCL, plus more soon - CNET

It's the best time of the year to save on new TVs from TCL, LG, Sony and more.

from CNET News https://ift.tt/3kwYCCh
via A.I .Kung Fu

Social media: How can we protect its youngest users?

A psychologist says parents need to provide their children with "digital resilience".

from BBC News - Technology https://ift.tt/2Urn1yI
via A.I .Kung Fu

Predicting qualification ranking based on practice session performance for Formula 1 Grand Prix

If you’re a Formula 1 (F1) fan, have you ever wondered why F1 teams have very different performances between qualifying and practice sessions? Why do they have multiple practice sessions in the first place? Can practice session results actually tell something about the upcoming qualifying race? In this post, we answer these questions and more. We show you how we can predict qualifying results based on practice session performances by harnessing the power of data and machine learning (ML). These predictions are being integrated into the new “Qualifying Pace” insight for each F1 Grand Prix (GP). This work is part of the continuous collaboration between F1 and the Amazon ML Solutions Lab to generate new F1 Insights powered by AWS.

Each F1 GP consists of several stages. The event starts with three practice sessions (P1, P2, and P3), followed by a qualifying (Q) session, and then the final race. Teams approach practice and qualifying sessions differently because these sessions serve different purposes. The practice sessions are the teams’ opportunities to test out strategies and tire compounds to gather critical data in preparation for the final race. They observe the car’s performance with different strategies and tire compounds, and use this to determine their overall race strategy.

In contrast, qualifying sessions determine the starting position of each driver on race day. Teams focus solely on obtaining the fastest lap time. Because of this shift in tactics, Friday and Saturday practice session results often fail to accurately predict the qualifying order.

In this post, we introduce deterministic and probabilistic methods to model the time difference between the fastest lap time in practice sessions and the qualifying session (∆t = tq-tp). The goal is to more accurately predict the upcoming qualifying standings based on the practice sessions.

Error sources of ∆t

The delta of the fastest lap time between practice and qualifying sessions (∆t) comes primarily from variations in fuel level and tire grip.

A higher fuel level adds weight to the car and reduces the speed of the car. For practice sessions, teams vary the fuel level as they please. For the second practice session (P2), it’s common to begin with a low fuel level and run with more fuel in the latter part of the session. During qualifying, teams use minimal fuel levels in order to record the fastest lap time. The impact of fuel on lap time varies from circuit to circuit, depending on how many straights the circuit has and how long these straights are.

Tires also play a significant role in an F1 car’s performance. During each GP event, the tire supplier brings various tire types with varying compounds suitable for different racing conditions. Two of these are for wet circuit conditions: intermediate tires for light standing water and wet tires for heavy standing water. The remaining dry running tires can be categorized into three compound types: hard, medium, and soft. These tire compounds provide different grips to the circuit surface. The more grip the tire provides, the faster the car can run.

Past racing results showed that car performance dropped significantly when wet tires were used. For example, in the 2018 Italy GP, because the P1 session was wet and the qualifying session was dry, the fastest lap time in P1 was more than 10 seconds slower than the qualifying session.

Among the dry running types, the hard tire provides the least grip but is the most durable, whereas the soft tire has the most grip but is the least durable. Tires degrade over the course of a race, which reduces the tire grip and slows down the car. Track temperature and moisture affects the progression of degradation, which in turn changes the tire grip. As in the case with fuel level, tire impact on lap time changes from circuit to circuit.

Data and attempted approaches

Given this understanding of factors that can impact lap time, we can use fuel level and tire grip data to estimate the final qualifying lap time based on known practice session performance. However, as of this writing, data records to directly infer fuel level and tire grip during the race are not available. Therefore, we take an alternative approach with data we can currently obtain.

The data we used in the modeling were records of fastest lap times for each GP since 1950 and partial years of weather data for the corresponding sessions. The lap times data included the fastest lap time for each session (P1, P2, P3, and Q) of each GP with the driver, car and team, and circuit name (publicly available on F1’s website). Track wetness and temperature for each corresponding session was available in the weather data.

We explored two implicit methods with the following model inputs: the team and driver name, and the circuit name. Method one was a rule-based empirical model that attributed observed  to circuits and teams. We estimated the latent parameter values (fuel level and tire grip differences specific to each team and circuit) based on their known lap time sensitivities. These sensitivities were provided by F1 and calculated through simulation runs on each circuit track. Method two was a regression model with driver and circuit indicators. The regression model learned the sensitivity of ∆t for each driver on each circuit without explicitly knowing the fuel level and tire grip exerted. We developed and compared deterministic models using XGBoost and AutoGluon, and probabilistic models using PyMC3.

We built models using race data from 2014 to 2019, and tested against race data from 2020. We excluded data from before 2014 because there were significant car development and regulation changes over the years. We removed races in which either the practice or qualifying session was wet because ∆t for those sessions were considered outliers.

Managed model training with Amazon SageMaker

We trained our regression models on Amazon SageMaker.

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy ML models quickly. Specifically for model training, it provides many features to assist with the process.

For our use case, we explored multiple iterations on the choices of model feature sets and hyperparameters. Recording and comparing the model metrics of interest was critical to choosing the most suitable model. The Amazon SageMaker API allowed customized metrics definition prior to launching a model training job, and easy retrieval after the training job was complete. Using the automatic model tuning feature reduced the mean squared error (MSE) metric on the test data by 45% compared to the default hyperparameter choice.

We trained an XGBoost model using the Amazon SageMaker’s built-in implementation. Its built-in implementation allowed us to run model training through a general estimator interface. This approach provided better logging, superior hyperparameter validation, and a larger set of metrics than the original implementation.

Rule-based model

In the rule-based approach, we reason that the differences of lap times ∆t primarily come from systematic variations of tire grip for each circuit and fuel level for each team between practice and qualifying sessions. After accounting for these known variations, we assume residuals are random small numbers with a mean of zero. ∆t can be modeled with the following equation:

∆tf(c) and ∆tg(c) are known sensitivities of fuel mass and tire grip, and  is the residual. A hierarchy exists among the factors contained in the equation. We assume grip variations for each circuit (g(c)) are at the top level. Under each circuit, there are variations of fuel level across teams (f(t,c)).

To further simplify the model, we neglect  because we assume it is small. We further assume fuel variation for each team across all circuits is the same (i.e., f(t,c) = f(t)). We can simplify the model to the following:

Because ∆tf(c) and ∆tg(c) are known, f(t) and g(c), we can estimate team fuel variations and tire grip variations from the data.

The differences in the sensitivities depend on the characteristics of circuits. From the following track maps, we can observe that the Italian GP circuit has fewer corner turns and the straight sections are longer compared to the Singapore GP circuit. Additional tire grip gives a larger advantage in the Singapore GP circuit.

 

ML regression model

For the ML regression method, we don’t directly model the relation between  and fuel level and grip variations. Instead, we fit the following regression model with just the circuit, team, and driver indicator variables:

Ic, It, and Id represent the indicator variables for circuits, teams, and drivers.

Hierarchical Bayesian model

Another challenge with modeling the race pace was due to noisy measurements in lap times. The magnitude of random effect (ϵ) of ∆t could be non-negligible. Such randomness might come from drivers’ accidental drift from their normal practice at the turns or random variations of drivers’ efforts during practice sessions. With deterministic approaches, such random effect wasn’t appropriately captured. Ideally, we wanted a model that could quantify uncertainty about the predictions. Therefore, we explored Bayesian sampling methods.

With a hierarchical Bayesian model, we account for the hierarchical structure of the error sources. As with the rule-based model, we assume grip variations for each circuit (g(c))) are at the top level. The additional benefit of a hierarchical Bayesian model is that it incorporates individual-level variations when estimating group-level coefficients. It’s a middle ground between two extreme views of data. One extreme is to pool data for every group (circuit and driver) without considering the intrinsic variations among groups. The other extreme is to train a regression model for each circuit or driver. With 21 circuits, this amounts to 21 regression models. With a hierarchical model, we have a single model that considers the variations simultaneously at the group and individual level.

We can mathematically describe the underlying statistical model for the hierarchical Bayesian approach as the following varying intercepts model:

Here, i represents the index of each data observation, j represents the index of each driver, and k represents the index of each circuit. μjk represents the varying intercept for each driver under each circuit, and θk represents the varying intercept for each circuit. wp and wq represent the wetness level of the track during practice and qualifying sessions, and ∆T represents the track temperature difference.

Test models in the 2020 races

After predicting ∆t, we added it into the practice lap times to generate predictions of qualifying lap times. We determined the final ranking based on the predicted qualifying lap times. Finally, we compared predicted lap times and rankings with the actual results.

The following figure compares the predicted rankings and the actual rankings for all three practice sessions for the Austria, Hungary, and Great Britain GPs in 2020 (we exclude P2 for the Hungary GP because the session was wet).

For the Bayesian model, we generated predictions with an uncertainty range based on the posterior samples. This enabled us to predict the ranking of the drivers relatively with the median while accounting for unexpected outcomes in the drivers’ performances.

The following figure shows an example of predicted qualifying lap times (in seconds) with an uncertainty range for selected drivers at the Austria GP. If two drivers’ prediction profiles are very close (such as MAG and GIO), it’s not surprising that either driver might be the faster one in the upcoming qualifying session.

Metrics on model performance

To compare the models, we used mean squared error (MSE) and mean absolute error (MAE) for lap time errors. For ranking errors, we used rank discounted cumulative gain (RDCG). Because only the top 10 drivers gain points during a race, we used RDCG to apply more weight to errors in the higher rankings. For the Bayesian model output, we used median posterior value to generate the metrics.

The following table shows the resulting metrics of each modeling approach for the test P2 and P3 sessions. The best model by each metric for each session is highlighted.

MODEL MSE MAE RDCG
  P2 P3 P2 P3 P2 P3
Practice raw 2.822 1.053 1.544 0.949 0.92 0.95
Rule-based 0.349 0.186 0.462 0.346 0.88 0.95
XGBoost 0.358 0.141 0.472 0.297 0.91 0.95
AutoGluon 0.567 0.351 0.591 0.459 0.90 0.96
Hierarchical Bayesian 0.431 0.186 0.521 0.332 0.87 0.92

All models reduced the qualifying lap time prediction errors significantly compared to directly using the practice session results. Using practice lap times directly without considering pace correction, the MSE on the predicted qualifying lap time was up to 2.8 seconds. With machine learning methods which automatically learned pace variation patterns for teams and drivers on different circuits, we brought the MSE down to smaller than half a second. The resulting prediction was a more accurate representation of the pace in the qualifying session. In addition, the models improved the prediction of rankings by a small margin. However, there was no one single approach that outperformed all others. This observation highlighted the effect of random errors on the underlying data.

Summary

In this post, we described a new Insight developed by the Amazon ML Solutions Lab in collaboration with Formula 1 (F1).

This work is part of the six new F1 Insights powered by AWS that are being released in 2020, as F1 continues to use AWS for advanced data processing and ML modeling. Fans can expect to see this new Insight unveiled at the 2020 Turkish GP to provide predictions for the upcoming qualifying races at practice sessions.

If you’d like help accelerating the use of ML in your products and services, please contact the Amazon ML Solutions Lab .

 


About the Author

Guang Yang is a data scientist at the Amazon ML Solutions Lab where he works with customers across various verticals and applies creative problem solving to generate value for customers with state-of-the-art ML/AI solutions.



from AWS Machine Learning Blog https://ift.tt/3nqbL22
via A.I .Kung Fu

Hisense 70-inch 4K UHD Smart Android TV deal is still available for $400 - CNET

One day later, still in stock -- for now. Plus, turn it (or any other) into a Roku TV for $29.

from CNET News https://ift.tt/3pv8Jeo
via A.I .Kung Fu

What The Crown season 4 gets right (and wrong) about Princess Diana - CNET

Real vs Netflix reel: The Spencer tiara should NOT look like a Burger King crown.

from CNET News https://ift.tt/32ZTGQD
via A.I .Kung Fu

Apple Watch SE Black Friday deal: 44mm back to $260 at Amazon - CNET

The weekend sale on the Apple Watch SE returns after a brief hiatus.

from CNET News https://ift.tt/3lmRHwE
via A.I .Kung Fu

A scheme to hand Trump electors in state legislatures is highly unlikely to happen.



from NYT > Technology https://ift.tt/3eYUkmb
via A.I .Kung Fu

AI services company C3.ai files for IPO, reports revenue of $157M in the fiscal year ending April 2020, up 71% YoY, and a deficit of $293M at the end of July (Tiernan Ray/ZDNet)

Tiernan Ray / ZDNet:
AI services company C3.ai files for IPO, reports revenue of $157M in the fiscal year ending April 2020, up 71% YoY, and a deficit of $293M at the end of July  —  The software-as-a-service company that has been using masses of GPUs to run deep learning programs plans to list under the ticker “AI.”



from Techmeme https://ift.tt/3nqOj4m
via A.I .Kung Fu

Ticketmaster's UK wing fined ~$1.6M by the UK ICO after a report found they failed to put appropriate security measures in place prior to their 2018 data breach (Shoshana Wodinsky/Gizmodo)

Shoshana Wodinsky / Gizmodo:
Ticketmaster's UK wing fined ~$1.6M by the UK ICO after a report found they failed to put appropriate security measures in place prior to their 2018 data breach  —  Ticketmaster's UK wing has been fined £1.25 million pounds (roughly $1.6 millions) following an investigation …



from Techmeme https://ift.tt/35wi65U
via A.I .Kung Fu

AI Weekly: Tech, power, and building the Biden administration

President-elect Joe Biden addresses the nation from the Chase Center November 07, 2020 in Wilmington, Delaware.
The presidential election is over, and debate over tech, power, and what the Biden administration should look like is in full swing.Read More

from VentureBeat https://ift.tt/3eVOqSN
via A.I .Kung Fu

Hackers sponsored by Russia and North Korea are targeting COVID-19 researchers

Hackers sponsored by the Russian and North Korean governments have been targeting companies directly involved in researching vaccines and treatments for COVID-19, and in some cases, the attacks have succeeded, Microsoft said on Friday.

In all, there are seven prominent companies that have been targeted, Microsoft Corporate VP for Customer Security & Trust Tom Burt said. They include vaccine makers with COVID-19 vaccines in various clinical trial stages, a clinical research organization involved in trials, and a developer of a COVID-19 test. Also targeted were organizations with contracts with or investments from governmental agencies around the world for COVID-19-related work. The targets are located in the US, Canada, France, India, and South Korea.

“Microsoft is calling on the world’s leaders to affirm that international law protects health care facilities and to take action to enforce the law,” Burt wrote in a blog post. “We believe the law should be enforced not just when attacks originate from government agencies but also when they originate from criminal groups that governments enable to operate—or even facilitate—within their borders. This is criminal activity that cannot be tolerated.”




from Biz & IT – Ars Technica https://ift.tt/2H38aHy
via A.I .Kung Fu

How critical is the weather for the SpaceX launch?

Nasa and SpaceX were due to send astronauts to the ISS on Saturday but the weather's changed their plans.

from BBC News - Technology https://ift.tt/36CRPlF
via A.I .Kung Fu

Black Friday 2020 ad scans: See the best deals and sales at Walmart, Best Buy, HP, Newegg, GameStop and more - CNET

Looking for Black Friday ads from major retailers? This is your home base.

from CNET News https://ift.tt/36xYUnA
via A.I .Kung Fu

Apple TV Plus: Best movies, TV shows and documentaries streaming now - CNET

Wondering how to allocate your Apple TV Plus time? Here are some ideas.

from CNET News https://ift.tt/2ICb9a6
via A.I .Kung Fu

Disney's live-action Lilo & Stitch one step closer with new director reportedly tapped - CNET

The company is in talks with Crazy Rich Asians director Jon M. Chu, according to The Hollywood Reporter.

from CNET News https://ift.tt/38HfTX2
via A.I .Kung Fu

Need luggage? Save 35% on anything at Monos right now - CNET

Monos offers a premium luggage experience at more down-to-earth prices.

from CNET News https://ift.tt/3nA9nG3
via A.I .Kung Fu

Oculus Quest update adds fitness tracker app, and Quest 2 gets 90Hz games - CNET

Oculus Move is a central fitness dashboard that rolls out next week, and updated games for Quest 2 are coming, too.

from CNET News https://ift.tt/2IyhIL8
via A.I .Kung Fu

Facebook plans to use machine learning to sort its moderation queue prior to human review, prioritizing viral and potentially severe content first (Kyle Wiggers/VentureBeat)

Kyle Wiggers / VentureBeat:
Facebook plans to use machine learning to sort its moderation queue prior to human review, prioritizing viral and potentially severe content first  —  Facebook says it's using AI to prioritize potentially problematic posts for human moderators to review as it works to more quickly remove content that violates its community guidelines.



from Techmeme https://ift.tt/32KMdER
via A.I .Kung Fu

Oculus Quest 2 update adds support for 90Hz games; an Oculus Move app for tracking fitness metrics while playing is coming in the following week (Cameron Faulkner/The Verge)

Cameron Faulkner / The Verge:
Oculus Quest 2 update adds support for 90Hz games; an Oculus Move app for tracking fitness metrics while playing is coming in the following week  —  Plus, the mobile app will be able to record the headset's screen in late November  —  Oculus has rolled out its first big update …



from Techmeme https://ift.tt/2IDogbs
via A.I .Kung Fu

Thursday, November 12, 2020

AWS expands language support for Amazon Lex and Amazon Polly

At AWS, our mission is to enable developers and businesses with no prior machine learning (ML) expertise to easily build sophisticated, scalable, ML-powered applications with our AI services. Today, we’re excited to announce that Amazon Lex and Amazon Polly are expanding language support. You can build ML-powered applications that fit the language preferences of your users. These easy-to-use services allow you to add intelligence to your business processes, automate workstreams, reduce costs, and improve the user experience for your customers and employees in a variety of languages.

New and improved features

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex now supports French, Spanish, Italian and Canadian French. With the addition of these new languages, you can build and expand your conversational experiences to better understand and engage your customer base in a variety of different languages and accents. Amazon Lex can be applied to a diverse set of use cases such as virtual agents, conversational IVR systems, self-service chatbots, or application bots. For a full list of languages, please go to Amazon Lex languages.

Amazon Polly, a service that turns text into lifelike speech, offers voices for all Amazon Lex languages. Our first Australian English voice, Olivia, is now generally available in Neural Text-to-Speech (NTTS). Olivia's voice sounds expressive and natural, and her unique vocal personality is easy to follow. You can now choose among three Australian English voices: Russell, Nicole, and Olivia. For a full list of Amazon Polly's voices, please go to Amazon Polly voices.

“Growing demand for conversational experiences led us to launch Amazon Lex and Amazon Polly to enable businesses to connect with their customers more effectively,” shares Julien Simon, AWS AIML evangelist.

“Amazon Lex uses automatic speech recognition and natural language understanding to help organizations understand a customer’s intent, fluidly manage conversations and create highly engaging and lifelike interactions. We are delighted to advance the language capabilities of Lex and Polly. These launches allow our customers to take advantage of AI in the area of conversational interfaces and voice AI,” Simon says.

“Amazon Lex is a core AWS service that enables Accenture to deliver next-generation, omnichannel contact center solutions, such as our Advanced Customer Engagement (ACE+) platform, to a diverse set of customers. The addition of French, Italian, and Spanish to Amazon Lex will further enhance the accessibility of our global customer engagement solutions, while also vastly enriching and personalizing the overall experience for people whose primary language is not English. Now, we can quickly build interactive digital solutions based on Amazon’s deep learning expertise to deflect more calls, reduce contact center costs and drive a better customer experience in French, Italian, and Spanish-speaking markets. Amazon Lex can now improve customer satisfaction and localized brand awareness even more effectively,” says J.C. Novoa, Global Technical Evangelist – Advanced Customer Engagement (ACE+) for Accenture.

Another example is Clevy, a French start-up and AWS customer. François Falala-Sechet, the CTO of Clevy adds, “At Clevy, we have been utilizing Amazon Lex’s best-in-class natural language processing services to help bring customers a scalable low-code approach to designing, developing, deploying and maintaining rich conversational experiences with more powerful and more integrated chatbots. With the addition of Spanish, Italian and French in Amazon Lex, Clevy can now help our developers deliver chatbot experiences to a more diverse audience in our core European markets.”

Eudata helps customers implement effective contact and management systems. Andrea Grompone, the Head of Contact Center Delivery at Eudata says, “Ora Amazon Lex parla in italiano! We are excited about the new opportunities this opens for Eudata. Amazon Lex simplifies the process of creating automated dialog-based interactions to address challenges we see in the market. The addition of Italian allows us to build a customer experience that ensures both service speed and quality in our markets.”

Using the new features

To use the new Amazon Lex languages, simply choose the language when creating a new bot via the Amazon Lex console or AWS SDK. The following screenshot shows the console view.

To learn more, visit the Amazon Lex Development Guide.
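As a rough illustration of the SDK route, the sketch below assembles the minimal parameters for a Lex (V1) `put_bot` call in one of the newly supported locales. The bot name and description are hypothetical, and the live boto3 call is left commented out since it requires AWS credentials:

```python
# Sketch only: minimal put_bot parameters for a bot in a newly
# supported locale. Bot name/description are illustrative assumptions.

NEW_LOCALES = ["fr-FR", "fr-CA", "es-ES", "it-IT"]  # newly supported languages

def build_bot_definition(name, locale):
    """Return the minimal parameters for creating a Lex bot in the given locale."""
    if locale not in NEW_LOCALES:
        raise ValueError(f"locale {locale!r} is not one of the new languages")
    return {
        "name": name,
        "locale": locale,
        "childDirected": False,  # required COPPA declaration
        "description": f"Demo bot ({locale})",
    }

# Live call (needs AWS credentials and a configured region):
# import boto3
# lex = boto3.client("lex-models")
# lex.put_bot(**build_bot_definition("FleursBot", "fr-FR"))
```

Keeping the parameter assembly separate from the API call makes the locale choice easy to validate and test before anything is sent to AWS.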

You can use the new Olivia voice in the Amazon Polly console, the AWS Command Line Interface (AWS CLI), or AWS SDK. The feature is available across all AWS Regions supporting NTTS. For the full list of available voices, see Voices in Amazon Polly, or log in to the Amazon Polly console to try it out for yourself.
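As a minimal sketch of the SDK path (not an official sample), the snippet below builds a `SynthesizeSpeech` request for Olivia. NTTS voices require the neural engine; the sample text and region are assumptions, and the live boto3 call is commented out:

```python
# Sketch only: assemble SynthesizeSpeech parameters for the new
# Olivia NTTS voice. Text and region below are illustrative.

def build_synthesize_request(text, voice_id="Olivia"):
    """Return SynthesizeSpeech parameters for a neural Australian English voice."""
    return {
        "Text": text,
        "VoiceId": voice_id,   # Russell, Nicole, or Olivia
        "Engine": "neural",    # Olivia is available only as an NTTS voice
        "OutputFormat": "mp3",
    }

# Live call (needs AWS credentials; region must support NTTS):
# import boto3
# polly = boto3.client("polly", region_name="ap-southeast-2")
# audio = polly.synthesize_speech(**build_synthesize_request("G'day from Olivia!"))
```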

Summary

Use Amazon Lex and Amazon Polly to build more self-service bots, to voice-enable applications, and to create an integrated voice and text experience for your customers and employees in a variety of languages. Try them out for yourself!

 


About the Author

Esther Lee is a Product Manager for AWS Language AI Services. She is passionate about the intersection of technology and education. Out of the office, Esther enjoys long walks along the beach, dinners with friends and friendly rounds of Mahjong.



from AWS Machine Learning Blog https://ift.tt/32AVoHK
via A.I .Kung Fu

Join the Final Lap of the 2020 DeepRacer League at AWS re:Invent 2020

AWS DeepRacer is the fastest way to get rolling with machine learning (ML). It’s a fully autonomous 1/18th scale race car driven by reinforcement learning, a 3D racing simulator, and a global racing league. Throughout 2020, tens of thousands of developers honed their ML skills and competed in the League’s virtual circuit via the AWS DeepRacer console and 14 AWS Summit online events.

The AWS DeepRacer League’s 2020 season is nearing the final lap with the Championship at AWS re:Invent 2020. From November 10 through December 15, there are three ways to join in the racing fun: learn how to develop a competitive reinforcement learning model through our sessions, enter and compete in the racing action for a chance to win prizes, and watch to cheer on other developers as they race for the cup. More than 100 racers have already qualified for the Championship Cup, but there is still time to compete. Log in today for a chance to win the Championship Cup by entering the Wildcard round, which offers the top five racers spots in the Knockout Rounds. The Knockout Rounds begin December 1, when racers compete all the way to the checkered flag and the Championship Cup. The Grand Prize winner will receive a choice of either 10,000 USD in AWS promotional credits and a chance to win an expenses-paid trip to an F1 Grand Prix in the upcoming 2021 season, or a Coursera online machine learning degree scholarship worth up to 20,000 USD. See our AWS DeepRacer 2020 Championships Official Rules for more details.

Watch the latest episode of DRTV news to learn more about how the Championship at AWS re:Invent 2020 will work.

Congratulations to our 2020 AWS re:Invent Championship Finalists!

Thanks to the thousands of developers who competed in the 2020 AWS DeepRacer League. Below is the list of our Virtual and Summit Online Circuit winners who qualified for the Championship at AWS re:Invent 2020.

Last chance for the Championship: Enter the Wildcard

Are you yet to qualify for the Championship Cup this season? Are you brand new to the league and want to take a shot at the competition? You have one last chance to qualify with the Wildcard: the open-play wildcard race runs through November. This is a traditional Virtual Circuit-style time trial race, taking place in the AWS DeepRacer console. Participants have until 11:59pm UTC November 30 (6:59pm EST, 3:59pm PST) to submit their fastest model. The top five competitors from the wildcard race will advance to the Championship Cup knockout.

Don’t worry if you don’t advance to the next round. There are chances for developers of all skill levels to compete and win at AWS re:Invent, including the AWS DeepRacer League open racing and special live virtual races. Visit our DeepRacer page for complete race schedule and additional details.

Here’s an overview of how the Championships are organized and how many racers participate in each round from qualifying through to the Grand Prix Finale.

Round 1: Live Group Knockouts

On December 1, racers need to be ready for anything in the championships, no matter what roadblocks they may come across. In Round 1, competitors have the opportunity to participate in a brand-new live racing format on the console. Racers submit their best models and control maximum speed remotely from anywhere in the world, while their autonomous models attempt to navigate the track, complete with objects to avoid. They’ll have 3 minutes to try to achieve their single best lap to top the leaderboard. Racers will be split into eight groups based on their time zone, with start order determined by the warmup round (the fastest racers from the warmup get to go last in their group). The top four times in each group will advance to our bracket round. Tune in to AWS DeepRacer TV throughout AWS re:Invent to catch the championship action.

Round 2: Bracket Elimination

The top 32 remaining competitors will be placed into a single elimination bracket, where they face off against one another in a head-to-head format in a five-lap race. Head-to-head virtual matchups will proceed until eight racers remain. Results will be released on the AWS DeepRacer League page and in the console. 

Round 3: Grand Prix Finale

The final race will take place before the closing keynote on December 15 as an eight-person virtual Grand Prix. Similar to the F1 ProAm in May, our eight finalists will submit their models on the console and the AWS DeepRacer team will run the Grand Prix, in which the eight racers simultaneously face off on the track in simulation to complete five laps. The first car to successfully complete five laps and cross the finish line will be crowned the 2020 AWS DeepRacer Champion, officially announced at the closing keynote.

More Options for your ML Journey

If you’re ready to get over the starting line on your ML journey, AWS DeepRacer re:Invent sessions are the best place to learn ML fast.  In 2020, we have not one, not two, but three levels of ML content for aspiring developers to go from zero to hero in no time! Register now for AWS re:Invent to learn more about session schedules when they become available.

  • Get rolling with Machine Learning on AWS DeepRacer (200L). Get hands-on with AWS DeepRacer, including exciting announcements and enhancements coming to the league in 2021. Learn about the basics of machine learning and reinforcement learning (a machine learning technique ideal for autonomous driving). In this session, you can build a reinforcement learning model and submit that model to the AWS DeepRacer League for a chance to win prizes and glory.
  • Shift your Machine Learning model into overdrive with AWS DeepRacer analysis tools (300L). Make your way from the middle of the pack to the top of the AWS DeepRacer podium! This session extends your machine learning skills by exploring how human analysis of reinforcement learning through logs will improve your performance through trend identification and optimization to better prepare for new racing divisions coming to the league in 2021.
  • Replicate AWS DeepRacer architecture to master the track with SageMaker Notebooks (400L). Complete the final lap on your machine learning journey by demystifying the underlying architecture of AWS DeepRacer using Amazon SageMaker, AWS RoboMaker, and Amazon Kinesis Video Streams. Dive into SageMaker notebooks to learn how others have applied the skills acquired through AWS DeepRacer to real-world use cases and how you can apply your reinforcement learning models to relevant use cases.
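For a sense of what competitors actually submit, here is a minimal reward function in the format the DeepRacer console expects (a Python function receiving a `params` dict from the simulator). The center-line bands and reward weights below are illustrative choices, not league-recommended values:

```python
def reward_function(params):
    """Minimal DeepRacer reward function: favor staying near the center line.

    The simulator passes a params dict; 'track_width' and
    'distance_from_center' are among its documented keys.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three illustrative bands around the center line
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0      # hugging the center line
    elif distance_from_center <= marker_2:
        reward = 0.5      # drifting a little
    elif distance_from_center <= marker_3:
        reward = 0.1      # near the edge
    else:
        reward = 1e-3     # likely off track

    return float(reward)
```

The analysis-tools session (300L) is about studying logs of exactly this kind of function's behavior to find where the reward shaping helps or hurts lap times.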

You can take all the courses live during re:Invent or learn at your own speed on-demand. It’s up to you. Visit the DeepRacer page at AWS re:Invent to register and find out more about when sessions will be available.

As you can see, there are many opportunities to up-level your ML skills, join in the racing action and cheer on developers as they go for the Championship Cup. Watch this page for schedule and video updates all through AWS re:Invent 2020!

 


About the Author

Dan McCorriston is a Senior Product Marketing Manager for AWS Machine Learning. He is passionate about technology, collaborating with developers, and creating new methods of expanding technology education. Out of the office he likes to hike, cook and spend time with his family.



from AWS Machine Learning Blog https://ift.tt/36yqVer
via A.I .Kung Fu

iOS 14.3 beta code indicates Apple may suggest third-party apps to users during the iPhone or iPad setup process, likely to appease antitrust concerns (Filipe Espósito/9to5Mac)

Filipe Espósito / 9to5Mac:
iOS 14.3 beta code indicates Apple may suggest third-party apps to users during the iPhone or iPad setup process, likely to appease antitrust concerns  —  As Apple has been investigating for anti-competitive practices, the company is working on new ways to avoid these accusations and even sanctions from governments around the world.



from Techmeme https://ift.tt/36ut23c
via A.I .Kung Fu

Many Mac users experienced app slowdowns during the launch of Big Sur, possibly due to issues with Apple's OCSP service being unable to validate certificates (Ars Technica)

Ars Technica:
Many Mac users experienced app slowdowns during the launch of Big Sur, possibly due to issues with Apple's OCSP service being unable to validate certificates  —  Even Macs that didn't upgrade to Big Sur had problems.  —  Mac users today began experiencing unexpected issues …



from Techmeme https://ift.tt/3lv6Fkc
via A.I .Kung Fu

Nintendo reminds everyone that Switch is in its sales prime

World of Tanks Blitz plays at 30 frames per second and 720p on the Switch handheld and 1080p on the TV.
Nintendo Switch sales hit 735,000 units in October, which is the second-highest October total ever in the United States.

from VentureBeat https://ift.tt/3pruxYi
via A.I .Kung Fu

Amazon Beefs Up AI in Alexa, and Gets Charged by EU With Unfair Practices 

By John P. Desmond, AI Trends Editor 

AI took center stage in recently-announced updates to the Alexa virtual voice assistant, and in the charges this week from the European Commission that Amazon is breaking EU competition rules.  

During Amazon’s Alexa Live event held in July, the company announced a major update to Alexa’s developer toolkit that brings AI improvements. Since launching in 2014, Amazon’s voice assistant has shipped hundreds of millions of units, which are targeted by a sizable developer community offering voice apps, called Skills, that extend the Alexa default feature set. Just as large selections of third-party applications differentiate Android and iOS, so Skills play an important role in Amazon’s growth strategy for Alexa, according to a recent account in siliconAngle.

Amazon added deep learning models for natural language understanding that the company said will enable Skills to recognize users’ voice commands with 15% higher accuracy on average. Current Skills users can benefit from the new technology without any modifications, according to Amazon.  

Amazon also enhanced the voice assistant platform for more specific uses that are emerging as Alexa is added to more devices, including smartphones, wearables and smart displays. A new tool, Apps for Alexa, allows developers of mobile apps to enable customer control in a hands-free way, such as with the Echo Buds wireless earbuds. Another tool enables developers to allow purchases such as food delivery orders on Alexa-powered smart screens, such as the Echo Show smart display.  

Developers of Skills for the Echo Buds are getting a new capability called “skill resumption,” which allows Skills to automatically “resume” at opportune times. For example, if a consumer uses Echo Buds to hail an Uber car, Uber’s Alexa skill can automatically notify them when their ride arrives without requiring a manual invocation.  

Skills have momentum; Amazon announced that customer engagement with Alexa Skills nearly doubled over the past year.   

AZ1 Edge Processor Can Perform On-Device Processing, a Privacy Win 

Alexa is also moving to the edge with its own chip in smart home edge devices. The Echo devices use the company’s AZ1 Neural Edge processor, which consumes 20x less power, uses 85% less memory, and delivers double the speech-processing power of its predecessors, according to an account from ZDNet.  

Rohit Prasad, VP and head scientist for Alexa AI, Amazon

The AZ1, in concert with Amazon’s AI advances, is aimed at making the Echo more aware of its surroundings. Dave Limp, senior vice president of devices and services at Amazon, stated that the new Echo devices are designed to make “moments count.” The new versions of Alexa will be able to learn from humans by asking follow-up questions when Alexa has a gap in its understanding, according to Rohit Prasad, VP and head scientist for Alexa AI at Amazon, in a presentation on new Alexa features at the virtual event. New versions will also use deep learning space parsers to understand gaps and extract new concepts, will support more natural conversation, and will engage a follow-up mode when interacting with humans.  

Alexa can use visual and acoustic cues to determine the best action to take. “This natural turn-taking allows people to interact with Alexa at their own pace,” Prasad stated. 

The new AI foundation technology behind Alexa’s ability to interpret context and adjust how it speaks to you has been in development for years at Amazon, Prasad said.   

The AZ1 edge processor is making Alexa faster. “The processor on the device is key with a fast-paced conversation,” stated Prasad. “The neural accelerator on the device makes decisions much faster.”  

Alexa for Business, rolled out over a year ago, has been adding features via AWS. Skill Blueprints launched in April 2018 as a way to allow anyone to create skills, and a 2019 update let them publish those skills to the Skills Store.   

Prasad did not outline the roadmap for Alexa for Business, but did say Echo’s new capabilities would apply to office settings as well as to yet-to-be-determined use cases. “There’s the potential to be able to teach Alexa anything in principle,” Prasad stated.  

The AZ1 processor, built with Taiwanese semiconductor company MediaTek, will speed Alexa’s response to queries and commands by hundreds of milliseconds per response, according to an account in The Verge. That allows for on-device neural speech recognition.  

Amazon’s preexisting products without the AZ1 send both the audio and its corresponding interaction to the cloud to be processed and sent back. Only the Echo and Echo Show 10 currently have the on-device memory needed to support Amazon’s new all-neural speech models. Because the data is stored and deleted locally, the edge computing is seen as a privacy win.  

European Commission Charging Amazon with Unfair Competition  

All this smart processing is getting Amazon into trouble in Europe: the European Commission this week charged the company with gaining an illegal advantage in the European marketplace. The charge is based on Amazon’s use of sales data from independent retailers selling through its site, data not available to other companies in the European market, which Amazon uses to sell more of its most profitable products.  

Margrethe Vestager, Executive Vice President, European Commission

Margrethe Vestager, the commission’s executive vice-president, stated that the commission’s preliminary conclusion was that Amazon used “big data” to illegally distort competition in France and Germany, the biggest online retail markets in Europe, according to an account in The Guardian. The investigators will examine whether Amazon set rules on its platform to benefit its own offers and those of independent retailers who use Amazon’s logistics and delivery services.   

“We do not take issue with the success of Amazon or its size. Our concern is very specific business conduct which appears to distort genuine competition,” Vestager stated. The EU team has since July analyzed a data sample of more than 18 million transactions on more than 100 million products.   

The commission determined that real time business data relating to independent retailers on the site was being fed into an algorithm used by Amazon’s own retail business. “It is based on these algorithms that Amazon decides what new products to launch, the price of each individual offer, the management of inventories and the choice of the best supplier for a product,” Vestager stated. “We therefore come to the preliminary conclusion that the use of this data allows Amazon to focus on the sale of the best-selling products, and this marginalizes third party sellers and caps their ability to grow.”  

Amazon faces a possible fine of up to 10% of its annual worldwide revenue. That could amount to as much as $28 billion, based on its 2019 revenue.   

In a statement Amazon said it disagreed with the findings. “There are more than 150,000 European businesses selling through our stores that generate tens of billions of euros in revenues annually,” the company stated. 

Read the source articles in siliconAngle, ZDNet, The Verge and The Guardian. 



from AI Trends https://ift.tt/3pqFE43
via A.I. Kung Fu