After the second day of Apple Inc. AAPL’s 2016 Worldwide Developers Conference (WWDC), Global Equities Research’s Trip Chowdhry shared some insights into the company’s major Machine Learning and Deep Learning initiatives and architectures.
As per the report, Apple is using a Recurrent Neural Network (RNN) built on Long Short-Term Memory (LSTM) units, implemented in the Theano framework and running on NVIDIA Corporation NVDA GPUs. This is, in fact, the deep learning system that will power the soon-to-be-released iOS 10 Messaging Platform, Siri Platform and a portion of Apple Maps, in addition to the Spotlight, Photos and Music apps/functionalities.
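Apple's actual implementation is not public; the report only names the building blocks. As an illustration of what an LSTM unit computes, here is a minimal NumPy sketch of a single LSTM step (the Theano version would express the same gating equations symbolically). All names and the stacked-parameter layout are illustrative assumptions, not Apple's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step, illustrating the standard gating equations.

    W (4H, D), U (4H, H) and b (4H,) stack the parameters of the
    input, forget and output gates plus the candidate cell state --
    a common layout, assumed here for compactness.
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # all four pre-activations at once
    i = sigmoid(z[0:H])             # input gate: how much new info to admit
    f = sigmoid(z[H:2 * H])         # forget gate: how much old memory to keep
    o = sigmoid(z[2 * H:3 * H])     # output gate: how much state to expose
    g = np.tanh(z[3 * H:4 * H])     # candidate cell state
    c = f * c_prev + i * g          # the "long short-term memory" itself
    h = o * np.tanh(c)              # new hidden state
    return h, c
```

The cell state `c` is what lets the network carry context across many tokens of a message, which is why LSTMs suit the text-prediction workloads (Messages, Siri) the report mentions.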
“The Training set, Validation Set and Test Set are all done on a hybrid of Apple Cloud” and Amazon.com, Inc. AMZN’s AWS, Chowdhry continued.
The expert went on to explain that once the deep learning model reaches the testing-error threshold, it gets pushed out as an iOS update. This process is continuous, “as the Neural Network model continues to refine itself based on various parameters,” he added.
Moving on to Apple's TF-IDF (Term Frequency – Inverse Document Frequency) algorithm, Chowdhry noted that it is implemented in the Caffe deep learning framework.
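TF-IDF itself is a standard weighting scheme: a term scores highly in a document when it is frequent there but rare across the collection. A minimal pure-Python sketch (not Apple's Caffe implementation) of the textbook formula:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF score for every term in every document.

    tf  = term count / document length
    idf = log(N / number of documents containing the term)
    """
    n = len(docs)
    # document frequency: in how many docs each term appears
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        counts = Counter(doc)
        scores.append({term: (count / len(doc)) * math.log(n / df[term])
                       for term, count in counts.items()})
    return scores
```

For example, over the documents `["apple", "maps"]` and `["apple", "music"]`, the term "apple" scores 0 (it appears everywhere, so it carries no distinguishing weight), while "maps" and "music" score positively in their own documents. This is why TF-IDF is useful for ranking search results, a plausible fit for the Spotlight-style features mentioned above.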
“Apple is also opening Machine Learning/ Deep Learning based services to developers (the complexity is however hidden from the App Developer),” the note concluded.
Disclosure: Javier Hasse holds no positions in any of the securities mentioned above.
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.