Most random numbers today are generated using pseudo-random number generators: mathematical methods that simulate randomness using deterministic, non-random components. Building an efficient and scalable true random number generator, one that draws its randomness from physics itself, presents a number of technical challenges and opportunities.
The algorithms research in COINFLIPS focuses on three particular challenges in using the true random numbers generated by our coin flip devices.
Generation of the Random Number that YOU Want
Today’s RNGs usually give us a uniformly distributed random number, in which every outcome is equally likely. This is easy to generate by making each bit of a binary random number a random 50/50 coin flip, which produces a number uniformly distributed over the range encoded by those bits.
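As an illustration (not the project's actual device interface), the Python sketch below builds a uniform random integer one bit at a time, using the standard secrets module as a stand-in for a physical 50/50 coin flip:

import secrets

def uniform_from_coinflips(num_bits: int) -> int:
    """Build a uniformly distributed integer in [0, 2**num_bits)
    by treating each bit as an independent 50/50 coin flip."""
    value = 0
    for _ in range(num_bits):
        flip = secrets.randbits(1)  # stand-in for one fair coin flip
        value = (value << 1) | flip
    return value

# Example: a uniform 8-bit number, i.e. an integer in [0, 255]
print(uniform_from_coinflips(8))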
For many applications, we do not want a uniform random number. Perhaps we’re looking at the outcome of a random diffusion process (Gaussian), the movement of financial instruments (log-normal), or the times between radioactive decays (exponential). In these cases, when we sample a uniform random number, we need to convert the sample to the desired distribution, which can be computationally expensive for some distributions.
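One standard conversion is the inverse transform method; the sketch below turns a uniform sample into an exponential sample. The rate parameter and the use of Python's built-in random module are purely illustrative:

import math
import random

def exponential_from_uniform(rate: float) -> float:
    """Convert a uniform sample in [0, 1) into an exponential sample
    via the inverse CDF: x = -ln(1 - u) / rate."""
    u = random.random()          # uniform sample, e.g. built from coin flips
    return -math.log(1.0 - u) / rate

# Example: five exponential samples with an assumed rate of 2.0
samples = [exponential_from_uniform(rate=2.0) for _ in range(5)]
print(samples)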
In COINFLIPS, we are developing algorithmic techniques that use biased (not 50/50) coin flips to sample from non-uniform distributions. By combining biased coins in clever ways, we can sample directly from the distributions required by different applications, avoiding the computationally expensive transformation steps.
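As a hedged illustration of the general idea (not the specific COINFLIPS algorithms), summing many biased coin flips directly yields a binomial count, which for a large number of flips approximates a Gaussian without any uniform-to-Gaussian transform step:

import random

def biased_flip(p: float) -> int:
    """One biased coin flip that lands heads (1) with probability p."""
    return 1 if random.random() < p else 0

def binomial_from_biased_coins(n: int, p: float) -> int:
    """Sum n biased flips: the count is Binomial(n, p), which for large n
    approximates a Gaussian centered at n*p."""
    return sum(biased_flip(p) for _ in range(n))

# Example: 1000 coins biased to p = 0.3 give a roughly Gaussian count near 300
print(binomial_from_biased_coins(1000, 0.3))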
In recent years, it has become increasingly appreciated that many computationally hard problems, such as the Traveling Salesman Problem, can be approximated using random sampling. While these approximate answers may not be optimal, they often come very close at a much lower computational cost.
In COINFLIPS, we have recognized that the sampling methods accelerated by stochastic devices can be used to directly perform the sampling steps of well-known graph approximation algorithms. We demonstrate how coin flip devices combined with artificial neurons can be used to sample the “maximum cut” of a graph, that is, a way to divide the graph’s vertices into two sets that maximizes the number of edges crossing between them.
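A minimal sketch of this idea is the classical randomized approximation for maximum cut, shown below with software coin flips standing in for stochastic devices; the example graph and sample count are assumptions for illustration:

import random

def sample_cut(edges, num_vertices):
    """Assign each vertex to side 0 or 1 with a fair coin flip and
    count the edges crossing the cut."""
    side = [random.randint(0, 1) for _ in range(num_vertices)]
    cut_size = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut_size

def best_of_samples(edges, num_vertices, num_samples=100):
    """Draw many random cuts and keep the largest one found.
    In expectation, a single random cut already crosses half of all edges."""
    return max((sample_cut(edges, num_vertices) for _ in range(num_samples)),
               key=lambda pair: pair[1])

# Example: a small 5-vertex graph given as a list of edges
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (1, 4)]
side, cut_size = best_of_samples(edges, num_vertices=5)
print(side, cut_size)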
Hardware Implementation of Bayesian Neural Networks
A significant challenge for modern artificial neural networks is teaching them not only to provide the right answers, but also to report a confidence in those answers. While today’s machine learning techniques allow us to train neural networks to tremendous accuracy, encoding uncertainty distributions into the network remains elusive. COINFLIPS is investigating strategies to accelerate Bayesian Neural Networks by reducing the computational cost of sampling large neural networks, using strategies similar to those in the sampling algorithms above.
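As a toy sketch of the Monte Carlo sampling underlying Bayesian neural networks (not the COINFLIPS hardware approach), the example below draws a single weight and bias from assumed Gaussian posteriors and reports both a predictive mean and an uncertainty; a real network would sample every weight:

import random
import statistics

def bayesian_linear_predict(x, weight_mean, weight_std, bias_mean, bias_std,
                            num_samples=1000):
    """Monte Carlo prediction for a one-weight Bayesian 'network':
    each sample draws a weight and bias from their (assumed Gaussian)
    posteriors, so the spread of outputs reflects model uncertainty."""
    outputs = []
    for _ in range(num_samples):
        w = random.gauss(weight_mean, weight_std)   # one stochastic weight sample
        b = random.gauss(bias_mean, bias_std)
        outputs.append(w * x + b)
    return statistics.mean(outputs), statistics.stdev(outputs)

# Example: predictive mean and uncertainty at x = 2.0 for assumed posterior parameters
mean, std = bayesian_linear_predict(2.0, weight_mean=1.5, weight_std=0.2,
                                    bias_mean=0.1, bias_std=0.05)
print(f"prediction = {mean:.2f} ± {std:.2f}")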
Acknowledgements
Copyright © 2023. Funded under the DOE National Laboratory Announcement Microelectronics Co-Design Research. SAND2022-3825 C and SAND2022-16196 O.
We acknowledge support from the DOE Office of Science's Advanced Scientific Computing Research (ASCR) and Basic Energy Sciences (BES) programs through the Microelectronics Co-Design project COINFLIPS.