In this era of automation, significant advantages can be gained by automatically generating verification and validation sequences from natural language text using artificial intelligence (AI) based sequence detection techniques, and then using those sequences in C/UVM code. This article surveys the current state of development in this area and outlines how you can implement your own solution to achieve truly specification-driven software development.
With the continuing advancement of AI and machine learning (ML), their application has spread across diverse high-technology fields, such as face detection, face swapping, object detection, image classification, language translation, chatbots, spam detection, and data scraping. Through AI, rule-based applications have taken a back seat, as many algorithms have been invented that can learn their own rules or build classifiers, using models such as linear regression, logistic regression, decision trees, and support vector machines (SVMs). Along with the algorithms, what really matters is the data used to train these models.

In EDA, applying ML or deep learning techniques enables modeling and simulation with unprecedented levels of insight. One can therefore expect greater efficiency and accuracy from design tools, which translates into shorter turnaround times and greater flexibility in analysis and simulation coverage, and thus encourages broader automation. Machine learning can help identify patterns to optimize designs, allowing designers and testers to model more complex designs in a simpler way and in less time. This makes designs more efficient in multiple aspects of automation and design generation, as well as in verification and validation, by using assertions or by generating sequences for special registers that provide full test reporting to accelerate the entire design process. ML-generated models can also provide better feedback to designers and engineers by indicating whether or not a design will live up to the expected performance at each step of the development process.
USE OF NATURAL LANGUAGE PROCESSING (NLP)
The use of natural language processing (NLP) to manipulate natural language text in order to determine and capture sequences is now possible using deep learning techniques. Machine translation, speech recognition, conversational chatbots, and part-of-speech (POS) taggers have been the most popular NLP applications.
NLP is a vast field of research broadly concerned with analyzing speech and text written in human languages. NLP grew out of the field of linguistics, has succeeded above expectations so far, and promises to achieve even more in the near future. It can be used to develop deep learning models for text classification, text translation, and more. NLP is sometimes referred to as “linguistic science” to include both classical linguistics and modern statistical methods. In ML, we are concerned mostly with the tools and methods of NLP, which is, at its core, the automatic processing of human-understandable languages.
Deep learning is a subfield of machine learning that focuses on algorithms inspired by the structure and function of the brain. These techniques have proved useful in solving challenging natural language processing problems, and several neural network architectures have had a great impact on natural language processing tasks.
Figure 1 – Comparison graph
USE OF RECURRENT NEURAL NETWORKS (RNN)
A neural network is, at its core, a series of algorithms that attempts to recognize underlying relationships in a set of data, similar to the way the human brain operates. A recurrent neural network (RNN) is a type of neural network designed for sequential real-world problems such as machine translation and chatbots. Its input and output are connected: the previous step’s output is fed in as the current step’s input to predict the next text. The RNN carries a hidden state that remembers information about the sequence, acting as built-in memory. It reuses the same parameters for every input, performing the same operation on each input and hidden layer to produce the output, which keeps the parameter count small.
An RNN is used in NLP to carry pertinent information from one item of a series to the next; i.e., it can take a series as input without any predefined limit on its size. In an RNN, every word is first transformed into a machine-readable vector, and this sequence of vectors is processed one by one. Processing is done by passing the hidden state on to the next step of the sequence; this hidden state acts as the memory of the neural network, holding the data the network has already seen. In other words, all the inputs are related to each other. This makes RNNs applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. See Figure 2 below.
Figure 2 – A typical RNN unit with a recurrent hidden layer
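The recurrence just described can be sketched in a few lines of Python. This is a minimal illustration with toy weight values and plain lists rather than a tensor library; the key point is that the same weights are reused at every time step while only the hidden state changes:

```python
import math

def rnn_step(x, h, Wx, Wh, b):
    """One recurrent step: h_t = tanh(Wx*x_t + Wh*h_{t-1} + b).
    Wx, Wh are weight matrices (lists of rows); x, h are vectors."""
    def matvec(W, v):
        return [sum(w * vi for w, vi in zip(row, v)) for row in W]
    pre = [a + c + d for a, c, d in zip(matvec(Wx, x), matvec(Wh, h), b)]
    return [math.tanh(p) for p in pre]

# The same parameters are applied at every step; the hidden state h
# acts as the network's memory of the inputs seen so far.
h = [0.0, 0.0]
Wx = [[0.5, -0.2], [0.1, 0.3]]   # input-to-hidden weights (toy values)
Wh = [[0.4, 0.0], [0.0, 0.4]]    # hidden-to-hidden weights (toy values)
b = [0.0, 0.0]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:   # a 3-step input sequence
    h = rnn_step(x, h, Wx, Wh, b)
```

After the loop, `h` summarizes the whole sequence; feeding each output back in as context is what makes the network “recurrent.”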
RNN-BASED LONG SHORT-TERM MEMORY (LSTM) USAGE
Because a plain RNN cannot process very long sequences, long short-term memory (LSTM) is used. An LSTM is a modified version of the RNN that makes it simpler to recall past data, helping the network retain the memory of long sequences.
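The gating that lets an LSTM retain or discard memory can be sketched for a scalar-valued cell. This is only an illustration: real cells operate on vectors and matrices, and the weight values below are arbitrary placeholders, not trained parameters:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM step over scalar input/state; W maps gate names to weights.
    Scalars keep the gate arithmetic easy to follow."""
    f = sigmoid(W["f_x"] * x + W["f_h"] * h + W["f_b"])    # forget gate
    i = sigmoid(W["i_x"] * x + W["i_h"] * h + W["i_b"])    # input gate
    g = math.tanh(W["g_x"] * x + W["g_h"] * h + W["g_b"])  # candidate memory
    o = sigmoid(W["o_x"] * x + W["o_h"] * h + W["o_b"])    # output gate
    c = f * c + i * g       # cell state: keep some old memory, add some new
    h = o * math.tanh(c)    # hidden state exposed to the next step
    return h, c

W = {k: 0.5 for k in ("f_x", "f_h", "f_b", "i_x", "i_h", "i_b",
                      "g_x", "g_h", "g_b", "o_x", "o_h", "o_b")}  # toy weights
h, c = 0.0, 0.0
for x in [1.0, -1.0, 1.0]:
    h, c = lstm_step(x, h, c, W)
```

The separate cell state `c` is what lets the LSTM carry information across long sequences: the forget gate decides how much of it survives each step.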
This type of RNN is very well suited to classifying, processing, and predicting time series with lags of unknown duration, and it is trained using backpropagation. At every time step, attention is given to the words that best help predict the output. An attention weight is calculated for each time step, and these weights are multiplied by the hidden state of the respective time step to form attention values. This is achieved with an attention mechanism that focuses on a specific part of the input sequence while predicting a specific part of the output sequence, which ultimately enables easier learning and higher-quality prediction. A final context vector is built as the weighted sum of the attention values (a dot product of the attention weights with the hidden states), a step also known as stacking. This context vector lets the decoder focus on the relevant parts of the input sequence while predicting its output.
Let c1, c2, c3, … be the context vectors, h1, h2, h3, … the output vectors of the encoder, and α1, α2, α3, … their attention weights. The dot product at decoder time step t is then:

ct = α1·h1 + α2·h2 + α3·h3 + …

with a fresh set of attention weights computed at each decoder time step.
The output is then fed word by word to the decoder of the LSTM network, and at each time step, the context vector is used to produce the appropriate output. Thus, the attention mechanism located between the encoder and the decoder enables improved performance.
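The attention computation described above (alignment scores normalized into weights, then a weighted sum of encoder hidden states) can be sketched as follows; the scores and hidden states here are toy values, and the softmax is the standard way such scores are turned into weights that sum to one:

```python
import math

def softmax(scores):
    """Normalize raw alignment scores into attention weights summing to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def context_vector(scores, hidden_states):
    """Attention: weight each encoder hidden state by its attention weight
    and sum, producing the context vector ct = sum_i(alpha_i * h_i)."""
    alphas = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(a * h[j] for a, h in zip(alphas, hidden_states))
            for j in range(dim)]

# Three encoder hidden states h1..h3 and their (toy) alignment scores;
# the high first score means the decoder attends mostly to the first word.
h_states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
scores = [2.0, 0.5, 0.1]
c = context_vector(scores, h_states)
```

Recomputing `scores` at every decoder step is what lets the decoder shift its focus across the input as it emits each output word.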
Over the years we have come across and understood a lot of different ways people use registers associated with the hardware/software interface (HSI). This information has helped us understand the kind of sequences users create to program and test their IPs.
In an ideal world, users would describe sequences in plain, simple English text rather than encoding them in various languages. In fact, that is already how it is done in the original specification. Natural, plain English is still the hallmark of specifications in today’s system design, and a lot of useful, actionable information is embedded in that natural language text.
Numerous translations happen when the architect/designer creates a specification in English and the hardware/software/firmware engineers must manually convert it into code. With this new methodology, the specification writer’s original intent is converted into real, usable code. Writers simply describe the sequences in natural language, just as they would when communicating with members of their team.
An RNN-based network is used in this context to read the input text (i.e., the sequence description) word by word. The order of the words in the sentence is maintained so the network can learn the meaning of the input text. Each word is processed through an RNN unit, which can be either an LSTM or a gated recurrent unit (GRU). LSTM and GRU are types of RNN, each with its own gating rules: the LSTM is able to retain the maximum information from long sequences, while the GRU can forget irrelevant information and retain only what is important to the context. These units process the words one by one and generate the resulting output information. A bidirectional layer is also used, which reads the input text in both directions (forward and backward), improving the model’s performance on this sequence classification task.
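The bidirectional reading just described can be illustrated with a toy recurrence. The `step` function below is a stand-in for a real LSTM/GRU cell; the point is only the structure: scan the tokens forward, scan them backward, and pair the two states at each position so every word sees both its left and right context:

```python
def run_rnn(tokens, step):
    """Scan tokens left to right, collecting the hidden state at each position."""
    h, states = 0.0, []
    for t in tokens:
        h = step(t, h)
        states.append(h)
    return states

# A toy recurrence: any real RNN/LSTM/GRU cell could be plugged in here.
step = lambda t, h: 0.5 * h + len(t) / 10.0

def bidirectional(tokens, step):
    """Pair forward and backward passes so each position carries
    context from both directions."""
    fwd = run_rnn(tokens, step)
    bwd = list(reversed(run_rnn(list(reversed(tokens)), step)))
    return list(zip(fwd, bwd))

states = bidirectional("write 1 to hsel field".split(), step)
```

In a real model the paired states would be concatenated vectors feeding the attention layer; here they are scalars purely to show the data flow.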
In the field of AI, everything comes down to numbers, vectors, matrices, and statistics: a model can consume only numbers and infer only probabilities, and the most probable outcome is always chosen. Our model follows the same logic to predict the most probable output. Because a neural network accepts only numbers rather than strings, each input word is converted into vector form (an embedding), representing every word with a fixed-size vector of numbers. On top of the embedding, the attention algorithm is used to predict the most probable expected output sequence. The attention layer gives certain words or vectors more weight by scoring them and comparing them with the output during training; this weighting helps predict the expected outputs for the desired input sentences more accurately. Ultimately, everything comes down to the dataset on which the model is trained: the data fed to the model should be as good as possible, i.e., the dataset must be apt if one is to expect high performance from the model.
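The embedding step can be illustrated with a toy lookup table. In the real model these vectors are learned during training; here the vocabulary, dimension, and values are invented for the example, and the zero-vector fallback shows why out-of-vocabulary words hurt prediction quality:

```python
import random

random.seed(0)  # deterministic toy vectors

def build_embeddings(vocab, dim=4):
    """Map each known word to a fixed-size vector of numbers; a trained
    model learns these vectors, here they are just random floats."""
    return {w: [random.uniform(-1, 1) for _ in range(dim)] for w in vocab}

def embed(sentence, table, dim=4):
    """Turn a sentence into a sequence of vectors; unknown words fall
    back to a zero vector, carrying no usable information."""
    return [table.get(w, [0.0] * dim) for w in sentence.lower().split()]

table = build_embeddings(["write", "read", "to", "register", "field", "1"])
vectors = embed("write 1 to register", table)
```

The list `vectors` is exactly the “sequence of vectors” the RNN units consume one by one.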
We have carefully created our corpus by getting references from actual register programming sequences used in the EDA industry. We have introduced a wide variety of cases, including cases with augmented data or noise. This robust model provides great accuracy in covering almost all scenarios of sequences that can be used by a designer for the description of an input text sequence.
This model has been deployed using a Django Framework to maintain communication between the model and iDSNG (our spec entry tool) through various APIs handling multiple requests at a single time. The communication network illustrated in Figure 3 represents the interaction of iDSNG with the model through the APIs.
Figure 3 – Communication framework
Figure 4 depicts example results: the input text, i.e., the sequence description, is shown in the “description” column, and the model’s predicted output in the “command” column.
The specification used in the example (Figure 4) basically consists of the register, “dma_controller,” having the field “hsel” and the register, “Sbcs” having the field “Sberror”.
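For illustration only, the description-to-command pairing shown in Figure 4 can be mimicked with a hand-written, rule-based stand-in. To be clear, this is not the trained model (which learns these mappings rather than matching patterns), and the `write(field, value)` / `read(field)` command syntax is an assumption for the example; only the register and field names come from the example spec:

```python
import re

def toy_translate(description):
    """A deliberately simple, rule-based stand-in for the trained model,
    mapping an English description to a command string."""
    m = re.search(r"write\s+(\w+)\s+to\s+(?:the\s+)?(\w+)\s+field",
                  description, re.IGNORECASE)
    if m:
        value, field = m.groups()
        return f"write({field}, {value})"
    m = re.search(r"read\s+(?:the\s+)?(\w+)\s+field", description, re.IGNORECASE)
    if m:
        return f"read({m.group(1)})"
    return None  # unparseable: analogous to out-of-vocabulary input

cmd = toy_translate("Write 1 to the hsel field")
```

Where this toy version fails on any unanticipated phrasing, the trained model generalizes, which is the whole motivation for the ML approach.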
Figure 4 – Description of sequence and the predicted output sequence
Some issues remain:

- Words unknown to the model’s vocabulary/dictionary may produce unexpected outputs, as they are out of context for the model.
- In some cases, data interpretation fails because the input description does not provide sufficient data.
- The model may produce unexpected results if not trained with enough accurate data; i.e., the training dataset needs to be large and accurate.
- Computational power, inference time, etc., constitute hindrances to viable usage with larger model architectures.
We have been able to handle a wide variety of cases with an accuracy of more than 90% and with no noticeable delay in inference time.
This model can effectively handle noise in the input text, attending only to the relevant parts of the text that influence the output sequence, and thus generates correct output sequences.
We achieved these results by ensuring a sufficient amount of correct data to train the model, improving the inference time by reducing the complexity of the model architecture, and working around other issues we faced along the way.
REFERENCES
- IDS NextGen – Comprehensive SoC/IP Specification and Code Generation Tool
- Register Automation using Machine Learning
- RISC-V Debug Specification Tutorial
- Intel® 64 and IA-32 Architectures Software Developer’s Manual
- ADV7511 Programming Guide
by Asif Ahmad and Abhishek Chauhan