Composer is a new feature for Wingman. Think of it as an advanced chat view that is capable of thinking through tasks. Composer currently works in two ways:
You can quick launch Composer:
Windows: Ctrl+I
macOS: Cmd+I
Composer will use the configured Chat Model in your settings. Make sure you also adjust the Chat Max Tokens to match your model's maximum output for the best results.
Check out the video above for tips on using Composer.
Composer is an advanced feature with complex prompts. Keep in mind that the quality of the output depends on the quality of the input: the more specific you are, the better the output will be, because Wingman uses your input to find the best files to work with.
Currently, Ollama as a provider is in an experimental phase. There are a number of challenges with supporting local models:
We have only tested using qwen2.5-coder.
Ollama quality will improve over time, but due to prompt complexity and local model sizes (3B, 7B, etc.), it may not always be perfect.
Automatic context means that Wingman leverages logic similar to how chat functions. In this mode you did not provide any specific files to Composer, so it will use your input to find files to work with. This currently uses vector search along with a reranking operation that limits the results to the best 5 files.
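As a rough illustration of the retrieve-then-rerank flow described above (not Wingman's actual implementation), the sketch below assumes hypothetical vectorSearch and rerank helpers:

```typescript
// Hypothetical sketch of automatic context selection:
// vector search for candidate files, then rerank and keep the top 5.

interface FileMatch {
  path: string;
  score: number;
}

// Assumed helpers -- stand-ins for whatever embedding and reranking services are configured.
declare function vectorSearch(query: string, limit: number): Promise<FileMatch[]>;
declare function rerank(query: string, candidates: FileMatch[]): Promise<FileMatch[]>;

async function selectContextFiles(userInput: string): Promise<string[]> {
  // Pull a broad set of candidates via vector similarity.
  const candidates = await vectorSearch(userInput, 20);

  // Rerank the candidates against the original request.
  const reranked = await rerank(userInput, candidates);

  // Keep only the best 5 files as context.
  return reranked.slice(0, 5).map((match) => match.path);
}
```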
Specific context means you provide files to Composer directly using @filename in the chat input. This allows you to search files across your workspace and target a group of files for specific changes, which can be very helpful when you know the general area you want to work in. Wingman will figure out which of those files apply to what you are asking it to do.
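For example, a request targeting a small group of files might look like this (the file paths here are placeholders):

```
@src/api/client.ts @src/api/types.ts Add basic retry logic to the request helper and update the related types.
```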
Composer supports uploading images either through the attachment icon or copy & paste. If the configured chat model supports images, the image will be provided along with the context above.
Composer will iterate over 3 unique phases for code generation:
Wingman will automatically choose "gpt-4o-mini" or "Claude 3.5 Haiku" for reranking results during planning. Ollama does not have a safe default and will use the existing chat model.
Planning
In this phase Wingman will develop a plan of action based on your project details, your input, and the code files it discovers or that you provide. This plan of action provides unique perspectives on what you are asking. Think of it as refining your question.
Writing
In this phase Wingman will modify existing files or create new ones. Using the plan created in the previous step, it will focus solely on creating the best code it possibly can.
Reviewing
In the last phase Wingman will review the changes the code writer has made. It will decide whether the changes met the objective or whether there are issues that need to be corrected. If there are issues, it will restart at the planning phase and use the existing code changes and code review as context.
Both the Planning and Review phases are a work in progress. The goal is to expand them in the future so that Wingman performs similarly to how you would approach a problem.
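As a simplified mental model of these phases (not the actual implementation), the loop below treats each phase as a stand-in function; the names and iteration cap are assumptions:

```typescript
// Hypothetical sketch of the plan -> write -> review loop described above.

interface ReviewResult {
  approved: boolean;
  feedback: string;
}

// Assumed stand-ins for the underlying model calls in each phase.
declare function plan(request: string, context: string[]): Promise<string>;
declare function write(planText: string): Promise<string[]>; // returns changed files
declare function review(request: string, changes: string[]): Promise<ReviewResult>;

async function compose(request: string, contextFiles: string[]): Promise<string[]> {
  let feedback = "";
  let changes: string[] = [];

  // Loop until the reviewer approves (a real implementation would also cap iterations).
  for (let attempt = 0; attempt < 3; attempt++) {
    // Planning: refine the request into a plan of action, folding in any prior review feedback.
    const planText = await plan(`${request}\n${feedback}`, contextFiles);

    // Writing: create or modify files based on the plan.
    changes = await write(planText);

    // Reviewing: decide whether the changes met the objective.
    const result = await review(request, changes);
    if (result.approved) break;

    // Otherwise restart at planning, using the changes and review as context.
    feedback = result.feedback;
  }

  return changes;
}
```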
You are not limited to out-of-the-box functionality when it comes to code writing. Simply add a .wingmanrules file to the root of your project:
You can add any additional instructions you want Wingman to follow. This can be helpful when you have specific coding standards or practices you want to enforce.
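A .wingmanrules file is plain text. A hypothetical example might look like this:

```
Always use TypeScript strict mode.
Prefer named exports over default exports.
Write unit tests for any new utility functions.
```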
A new experimental feature called "Validate" has been added. Validate appears after a Composer run completes successfully. It allows you to configure a command to be run via a shell (the command line on Windows, and bash on macOS) with up to a 60-second timeout. Wingman will monitor the command and, once it completes, interpret the results. If the result is interpreted as a failure, Wingman will automatically submit the error as a new Composer message. The idea is for it to generate the fix for you as Composer output and let you apply it.
This feature is manually activated and optional in its experimental stage.
The idea of this feature is to extend functionality into bigger integrations in the future and allow teams to hook up other commands that should run post-change, such as linting, etc.
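As a rough sketch of what a validation step like this involves (not Wingman's actual code), the Node.js snippet below runs a configured command with a 60-second timeout and treats a non-zero exit code or timeout as a failure; the example command is just a placeholder:

```typescript
// Hypothetical sketch of a post-run validation step: execute a configured
// command with a 60s timeout and report whether it should be treated as a failure.
import { exec } from "node:child_process";

interface ValidationResult {
  failed: boolean;
  output: string;
}

function runValidation(command: string): Promise<ValidationResult> {
  return new Promise((resolve) => {
    // exec uses the platform shell and kills the process after the timeout.
    exec(command, { timeout: 60_000 }, (error, stdout, stderr) => {
      resolve({
        // A non-zero exit code or a timeout counts as a failure.
        failed: error !== null,
        output: `${stdout}\n${stderr}`.trim(),
      });
    });
  });
}

// Example usage with a placeholder command a team might configure (lint + tests).
runValidation("npm run lint && npm test").then((result) => {
  if (result.failed) {
    // In Wingman's flow, this error output would be submitted back to Composer.
    console.log("Validation failed:\n" + result.output);
  }
});
```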