Categories
Podcast

Enabling AI Applications through Datacenter Connectivity with Nvidia

AI applications typically require massive volumes of data and multiple devices within the datacenter

Using ML to Optimize ML with Luis Ceze of OctoML

Training and optimizing a machine learning model takes a lot of compute resources, but what if we used ML to optimize ML? Luis Ceze created the Apache Tensor Virtual Machine (TVM) to optimize ML models and has now founded a company, OctoML, to leverage this technology

AI Needs Non-Traditional Storage Solutions with James Coomer of DDN

AI applications involve large data volumes and many clients, and conventional storage systems aren’t a good fit

Balancing Data Security and Data Processing with Arti Raman of Titaniam

AI and analytics need access to massive volumes of data, but we are constantly reminded of the importance of securing that data

Using AI to Assess Unstructured Data with Concentric

Most organizations have a vast amount of so-called unstructured data, and this poses a major risk for operations

AI and Analytics Are Driving a New Kind of Storage with Brad King of Scality

Big data really wasn’t all that big until modern analytics and machine learning applications appeared, but now storage solutions have to scale capacity and performance like never before

Building Transparency and Fighting Bias in AI with Ayodele Odubela

When it comes to AI, it’s garbage in, garbage out: A model is only as good as the data used

Algorithmic Bias and Subjective Decision Making with Alf Rehn

Biases can creep into any data set, and these can cause trouble when this data is used to train an AI model

Improving AI with Transfer Learning Featuring Frederic Van Haren

Productive use of AI requires the application of existing models to new applications through a process called transfer learning

Moving AI To the Edge with BrainChip

BrainChip is developing a novel, ultra-low-power “neuromorphic” AI processor that can be embedded in virtually any electronic device, rather than centralizing learning in high-performance processors