Transformer Technology in Government: A Look at the Benefits and Challenges
By Nathan Manzotti, Director, Data & Analytics Center of Excellence, Technology Transformation Services (TTS)
Artificial Intelligence (AI) is rapidly advancing, and many new products and services that use AI have the potential to revolutionize our lives. One of the most prominent recent developments is the Generative Pre-trained Transformer (GPT).
Generative Pre-trained Transformers are trained on extremely large corpora of text, parallelized across hundreds of Graphics Processing Units (GPUs) to process the entire input at once, creating billions of parameters and weighted relationships between words, phrases, and concepts. Processing all the input data at once reduces training times compared to previously popular deep learning models such as Recurrent Neural Networks (RNNs), which consume input one token at a time. These models can also be adapted to entirely new domains: for example, a text-based music generator was created by fine-tuning Stable Diffusion (a text-to-image model) to output images of spectrograms, which are then converted to audio clips.
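To make the parallelism concrete, here is a toy NumPy sketch of scaled dot-product attention, the core operation inside a transformer. The matrix products score every position against every other position in one shot, rather than stepping through the sequence token by token as an RNN would; the dimensions and random inputs are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend over every position of the input simultaneously.

    Q, K, V: (seq_len, d) arrays of query/key/value vectors.
    Returns: (seq_len, d) array of attention outputs.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (seq_len, seq_len) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted sum of value vectors

# A 4-token sequence with 8-dimensional embeddings, processed in one shot.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```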
The government can leverage these technological advances to provide better service to the public. Creating these models from scratch is extremely difficult; fine-tuning existing models, however, is the clear choice when an agency wants customization using its own data. Customizing involves selecting a model, assembling a dataset, fine-tuning the model on that dataset, and integrating the result with a product or user interface. I’ve recently seen over 150 pre-trained transformer models that would be candidates for fine-tuning and that integrate easily with common machine learning tools.
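As a rough illustration of that four-step workflow, the sketch below fine-tunes a small pre-trained model with the Hugging Face transformers library. The model name (distilbert-base-uncased), the dataset (imdb), and the training settings are placeholder assumptions for illustration, not recommendations.

```python
# A minimal fine-tuning sketch; names and settings are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # 1. select a pre-trained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")           # 2. assemble a dataset (example corpus)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(                       # 3. fine-tune the model on that dataset
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()

model.save_pretrained("out/final")       # 4. package for integration with a product or UI
```

Once saved, the fine-tuned model can be loaded behind an API or user interface to complete the integration step.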
It is also important to consider risks such as bias in the dataset used to train the existing model. Evaluating a candidate model for performance, intrinsic bias, and appropriate use cases is critical before selecting it. There are several methods for detecting bias in AI, including:
- Data Auditing: Examines the data used to train the AI system and identifies any patterns of bias or skew.
- Fairness Metrics: Uses statistical measures to determine if the AI system is treating different groups fairly (a minimal example follows this list).
- Counterfactual Analysis: Examines the AI system’s predictions for different input values and determines if the system is biased.
- User Testing: Tests the AI system with a diverse group of users to identify biases or discriminatory behavior.
- Interactive Bias Detection: Uses interactive visualization and explanation tools to aid humans in understanding and detecting bias in AI systems.
No single method can guarantee the detection of all biases; a combination of methods is typically used in practice.
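As a concrete example of the fairness-metrics approach above, here is a minimal pandas sketch of one common measure, the demographic parity difference: the gap in positive-prediction rates between groups. The column names and toy data are assumptions for illustration.

```python
import pandas as pd

def demographic_parity_difference(df, group_col="group", pred_col="prediction"):
    """Return the gap between the highest and lowest positive-prediction
    rates across groups; 0.0 means every group is treated identically."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.max() - rates.min()

# Toy predictions for two groups (illustrative data only).
predictions = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1, 1, 0, 1, 0, 0],  # the model's binary decisions
})
print(demographic_parity_difference(predictions))  # ~0.333 -> groups differ
```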
There is also federal guidance for agencies with respect to AI:
- The National Institute of Standards and Technology (NIST) has released its “Artificial Intelligence Risk Management Framework (AI RMF),” which provides guidance on managing risks associated with AI systems.
- The Office of Management and Budget (OMB) released a memorandum that provides guidelines for federal agencies to follow when developing and deploying AI systems. The memorandum emphasizes the importance of transparency, fairness, and safety in AI systems.
- The National Science and Technology Council (NSTC) has released a “National Artificial Intelligence Research and Development Strategic Plan,” which outlines a strategic vision and plan for the government’s AI research and development efforts.
- The National Security Commission on Artificial Intelligence (NSCAI) released a report recommending how the federal government can maintain U.S. leadership in artificial intelligence and machine learning.
- The Defense Innovation Board (DIB) released “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense” to ensure the ethical, responsible, and accountable use of AI in the defense sector.
Controlling the outputs of a transformer model can be achieved through several methods. Intrinsically, fine-tuning adjusts the model itself to produce more relevant results for the specific task or application at hand. Extrinsically, constraints around inputs and outputs are still important. Agencies should reference available federal guidance on these topics, such as Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.
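For instance, extrinsic output constraints can be applied at generation time. The sketch below uses the Hugging Face transformers generate() API to cap response length and block a token sequence; the gpt2 model, the prompt, and the blocked word are placeholder assumptions.

```python
# Output control sketch: length cap plus a blocked token sequence.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Token ids the model must never emit (illustrative blocked word).
bad_words_ids = tokenizer(["classified"], add_special_tokens=False).input_ids

inputs = tokenizer("Summarize the agency's public report:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,            # hard cap on output length
    bad_words_ids=bad_words_ids,  # suppress the blocked token sequence
    do_sample=False,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```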
Finally, controlling the inputs that a transformer model is fed is an equally important aspect of controlling its outputs. Basic input filtering at the application layer can be used to control which prompts reach the model.
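A minimal sketch of such an application-layer filter follows, assuming a simple regex blocklist; the patterns and rejection policy here are illustrative, not a vetted safeguard.

```python
# Screen prompts before they ever reach the model.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\bignore (all )?previous instructions\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like strings
]

def screen_prompt(prompt: str) -> str:
    """Reject prompts that match a blocked pattern; otherwise pass through."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected by input policy.")
    return prompt
```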
It is important to note that controlling the output of a transformer model is an iterative process, and one that is difficult to reconcile with the goal of achieving high task performance. The best approach is to try different methods, evaluate the results using metrics appropriate to the task, and interpret the output carefully.
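One way to make that evaluation loop concrete is to score candidate outputs against reference texts with a task-appropriate metric. The sketch below assumes a summarization task and uses the Hugging Face evaluate library to compute ROUGE on toy strings; the example texts are invented for illustration.

```python
# Score a candidate output against a reference with ROUGE.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["The agency released its annual modernization report."]
references  = ["The agency published its annual report on IT modernization."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # e.g. {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., ...}
```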
The GSA Centers of Excellence (CoEs) assist federal agencies with their technology modernization initiatives, partnering with industry to accelerate IT modernization across multiple technology domains. If you or your organization are interested in working with the CoEs, please reach out to [email protected] or visit coe.gsa.gov. And if you are a member of the federal government, please consider joining the AI Community of Practice.