Artificial Intelligence Infrastructure-as-Code Generator.
- Description
- Use Cases and Example Prompts
- Instructions
- Example Output
- Troubleshooting
- Support Channels
- License
aiac is a command line tool to generate IaC (Infrastructure as Code) templates, configurations, utilities, queries and more
via OpenAI's API. The CLI allows you to ask the model to generate templates
for different scenarios (e.g. "get terraform for AWS EC2"). It will make the
request and either store the resulting code in a file or simply print it to standard
output. By default, aiac uses the same model used by ChatGPT, but allows using
different models.
aiac get terraform for a highly available eks
aiac get pulumi golang for an s3 with sns notification
aiac get cloudformation for a neptundb
aiac get dockerfile for a secured nginx
aiac get k8s manifest for a mongodb deployment
aiac get jenkins pipeline for building nodejs
aiac get github action that plans and applies terraform and sends a slack notification
aiac get opa policy that enforces readiness probe at k8s deployments
aiac get python code that scans all open ports in my network
aiac get bash script that kills all active terminal sessions
aiac get kubectl that gets ExternalIPs of all nodes
aiac get awscli that lists instances with public IP address and Name
aiac get mongo query that aggregates all documents by created date
aiac get elastic query that applies a condition on a value greater than some value in aggregation
aiac get sql query that counts the appearances of each row in one table in another table based on an id column
You will need to provide an OpenAI API key in order for aiac to work. Refer to
OpenAI's pricing model for
more information. As of this writing, you get $5 in free credits upon signing up,
but generally speaking, this is a paid API.
Via brew:
brew install gofireflyio/aiac/aiac
Using docker:
docker pull ghcr.io/gofireflyio/aiac
Using go install:
go install github.com/gofireflyio/aiac/v3@latest
Alternatively, clone the repository and build from source:
git clone https://github.com/gofireflyio/aiac.git
cd aiac
go build
- Create your OpenAI API key here.
- Click "Create new secret key" and copy it.
- Provide the API key via the OPENAI_API_KEY environment variable or via the --api-key command line flag, as shown below.
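For example, either of the following invocations works (the key value here is just a placeholder, substitute your own):

# via the environment variable
export OPENAI_API_KEY="<your OpenAI API key>"
aiac get terraform for AWS EC2

# or via the command line flag
aiac get terraform for AWS EC2 --api-key="<your OpenAI API key>"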
By default, aiac prints the extracted code to standard output and opens an interactive shell that allows retrying requests, enabling chat mode (for chat models), saving output to files, copying code to clipboard, and more:
aiac get terraform for AWS EC2
You can also ask it to store the code in a specific file with a flag:
aiac get terraform for eks --output-file=eks.tf
You can use a flag to save the full Markdown output as well:
aiac get terraform for eks --output-file=eks.tf --readme-file=eks.md
If you prefer aiac to print the full Markdown output to standard output rather
than the extracted code, use the -f or --full flag:
aiac get terraform for eks -f
You can use aiac in non-interactive mode by providing the -q or --quiet flag;
this simply prints the generated code to standard output, and optionally saves
it to files with the flags above:
aiac get terraform for eks -q
In quiet mode, you can also send the resulting code to the clipboard by
providing the --clipboard flag:
aiac get terraform for eks -q --clipboard
Note that aiac will not exit in this case until the contents of the clipboard change. This is due to the mechanics of the clipboard.
By default, aiac uses the gpt-3.5-turbo chat model, but other models are supported, including gpt-4. You can list all supported models:
aiac list-models
To generate code with a different model, provide the --model flag:
aiac get terraform for eks --model="text-davinci-003"
All the same instructions apply, except that you run the commands via the Docker image:
docker run \
-it \
-e OPENAI_API_KEY=[PUT YOUR KEY HERE] \
ghcr.io/gofireflyio/aiac get terraform for ec2
You can use aiac as a library:
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/gofireflyio/aiac/v3/libaiac"
)

func main() {
    client := libaiac.NewClient(os.Getenv("OPENAI_API_KEY"))
    ctx := context.TODO()

    // use the model-agnostic wrapper
    res, err := client.GenerateCode(
        ctx,
        libaiac.ModelTextDaVinci3,
        "generate terraform for ec2",
    )
    if err != nil {
        fmt.Fprintf(os.Stderr, "Failed generating code: %s\n", err)
        os.Exit(1)
    }

    fmt.Fprintln(os.Stdout, res.Code)

    // use the completion API (for completion-only models)
    res, err = client.Complete(
        ctx,
        libaiac.ModelTextDaVinci3,
        "generate terraform for ec2",
    )

    // converse via a chat model
    chat := client.Chat(libaiac.ModelGPT35Turbo)
    res, err = chat.Send(ctx, "generate terraform for eks")
    res, err = chat.Send(ctx, "region must be eu-central-1")
}
Command line prompt:
aiac get dockerfile for nodejs with comments
Output:
FROM node:latest

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Bundle app source
COPY . .

EXPOSE 8080

CMD [ "node", "index.js" ]
aiac is a command line client for OpenAI's API, and most errors you are likely
to encounter come from that API. Some common errors you may encounter are:
- "[insufficient_quota] You exceeded your current quota, please check your plan and billing details": As described in the Instructions section, OpenAI is a paid API with a certain amount of free credits given. This error means you have exceeded your quota, whether free or paid. You will need to top up to continue usage.
- "[tokens] Rate limit reached...": The OpenAI API employs rate limiting as described here. aiac only performs individual requests and cannot work around or prevent these rate limits. If you are using aiac programmatically, you will have to implement throttling yourself. See here for tips, and the sketch after this list for one possible approach.
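As a minimal sketch (not an official recommendation), a shell wrapper can simply retry a quiet-mode invocation with a pause between attempts; the retry count and delay below are arbitrary and should be tuned to your own rate limits:

# retry up to 3 times, pausing between attempts to stay under the rate limit
for attempt in 1 2 3; do
    aiac get terraform for eks -q --output-file=eks.tf && break
    sleep 30
done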
We have two main channels for supporting AIaC:
- Slack community: general user support and engagement.
- GitHub Issues: bug reports and enhancement requests.
This code is published under the terms of the Apache License 2.0.