openai vision help
openai vision void
digest https://platform.openai.com/docs/guides/vision
GPT-4 with Vision (GPT-4V, `gpt-4-vision-preview` in the API) lets the model take in images and answer questions about them.
The Chat Completions API, unlike the Assistants API, is not stateful.
openai complete "describe Vancouver in one sentence."
input = list of images ✅ output = text ✅ use URL ✅ detail: low, high, auto 🌟 ✅
bash ✅ python ✅
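Per the digest above, a Chat Completions vision request carries the text prompt plus one image part per URL, each with a detail level of low, high, or auto. A minimal sketch of the request payload (the URL is a placeholder; the shape follows the guide linked above):

```python
def build_vision_payload(prompt, image_urls, detail="auto"):
    # One user message: the text prompt followed by one image_url
    # part per image; "detail" is low, high, or auto.
    content = [{"type": "text", "text": prompt}]
    for url in image_urls:
        content.append(
            {"type": "image_url", "image_url": {"url": url, "detail": detail}}
        )
    return {
        "model": "gpt-4-vision-preview",
        "messages": [{"role": "user", "content": content}],
        "max_tokens": 300,
    }


payload = build_vision_payload(
    "describe Vancouver in one sentence.",
    ["https://example.com/vancouver.jpg"],
    detail="low",
)
```

The payload can then be posted through the `openai` python package or plain HTTPS.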
openai vision "prompt" \
[auto|low|high,prefix=<prefix>] \
<.|object-name> \
[--verbose 1] \
[--count <count>] \
[--extension <jpg|extension>]
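The bracketed argument packs the detail level and optional prefixes into one comma-separated token (e.g. `high,prefix=Bute`). A sketch of how such a token could be split (the helper name is hypothetical, not the actual CLI implementation):

```python
def parse_options(options: str):
    # Split e.g. "high,prefix=Bute" into a detail level and a list of
    # filename prefixes; unrecognized tokens are ignored.
    detail = "auto"
    prefixes = []
    for token in options.split(","):
        if token in ("auto", "low", "high"):
            detail = token
        elif token.startswith("prefix="):
            prefixes.append(token[len("prefix="):])
    return detail, prefixes
```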
openai vision help
To find a Vancouver-Watching ingest:
vanwatch init
vanwatch list area=vancouver,ingest,published
complete_object
▶️ complete ◀️ list of images ✅
@select $(vanwatch list area=vancouver,ingest,published \
--log 0 \
--count 1 \
--offset 0); open .
ButeNorthDavie-inference.jpg
@select $(vanwatch list area=vancouver,ingest,published \
--log 0 \
--count 1 \
--offset 0); ls Bute*
ls -1 *.jpg | grep Davie
ls -1 *.jpg | grep Davie | grep Bute | grep -v inference
prefix ▶️ options ✅
~inference ◀️ always enforced ✅
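The grep chain above keeps frames that carry every prefix, and `-inference` outputs are always excluded. The same filter in python (a hypothetical helper mirroring the shell pipeline, not the actual CLI code):

```python
def filter_frames(filenames, prefixes, extension="jpg"):
    # Keep files with the right extension that contain every prefix,
    # always excluding "-inference" outputs.
    return [
        name
        for name in filenames
        if name.endswith(f".{extension}")
        and all(prefix in name for prefix in prefixes)
        and "-inference" not in name
    ]
```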
openai vision help
openai vision \
"you are a police officer, what do you see in these images?" \
dryrun \
Davie,Bute \
$(vanwatch list area=vancouver,ingest,published \
--log 0 \
--count 1 \
--offset 0) \
--max_count 10 \
--verbose 1
openai vision \
"you are a police officer, what do you see in these images?" \
- Davie,Bute \
$(vanwatch list area=vancouver,ingest,published \
--log 0 \
--count 1 \
--offset 0) \
--max_count 10 \
--verbose 1
notebooks/vision.ipynb
✅
Next: validation, completed at the gallery.