Commit Graph

173 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Suchintan | 2eeca1c699 | Add invalid response to log to help us better debug it (#4142) (Co-authored-by: Suchintan Singh <suchintan@skyvern.com>) | 2025-11-29 00:58:10 -05:00 |
| Suchintan | d6aed0d0be | Fix openai flex usage (#4141) (Co-authored-by: Suchintan Singh <suchintan@skyvern.com>) | 2025-11-29 00:15:38 -05:00 |
| Mohamed Khalil | b7ecdaafb7 | Add schema validation and default value filling for extraction results (#4063) (Co-authored-by: Suchintan <suchintan@users.noreply.github.com>) | 2025-11-28 15:10:41 +08:00 |
| pedrohsdb | 9785822e24 | add magnifex openai flex config (#4106) | 2025-11-26 11:21:38 -08:00 |
| pedrohsdb | 85fe9d69a5 | prevent cached_content leak to non-extract prompts (#4089) | 2025-11-25 08:51:37 -08:00 |
| pedrohsdb | ae38b9096f | fix(llm): prevent cached_content from being passed to non-Gemini models (#4086) | 2025-11-24 18:24:45 -08:00 |
| pedrohsdb | b52982d3c8 | Pedro/thinking budget tests (#4072) | 2025-11-21 21:22:12 -08:00 |
| LawyZheng | 7c189818d9 | allow extract result to be non dict (#4069) | 2025-11-22 10:36:43 +08:00 |
| LawyZheng | 6358b8b1d7 | raise exception when non dict response (#4057) | 2025-11-21 15:19:06 +08:00 |
| pedrohsdb | d277882b72 | handle list-wrapped llm responses (#4056) | 2025-11-20 20:31:00 -08:00 |
| pedrohsdb | 46383c316d | restore vertex cache credentials (#4050) | 2025-11-20 14:54:53 -08:00 |
| Celal Zamanoglu | 5fc9435ef3 | improve validations on parameter run ui (#4000) (Co-authored-by: Jonathan Dobson <jon.m.dobson@gmail.com>) | 2025-11-20 19:44:58 +03:00 |
| pedrohsdb | bc6d7affd5 | use explicit vertex credentials for cache manager (#4039) | 2025-11-19 17:05:49 -08:00 |
| LawyZheng | 0b47dff89d | fix cua engine (#4036) | 2025-11-20 02:24:00 +08:00 |
| pedrohsdb | f7e68141eb | add vertex gemini 3 pro config (#4025) | 2025-11-18 16:13:51 -08:00 |
| pedrohsdb | d1c7c675cf | cleaned up fallback router (#4010) | 2025-11-17 12:08:19 -08:00 |
| pedrohsdb | b7e28b075c | parallelize goal check within task (#3997) | 2025-11-13 17:18:32 -08:00 |
| pedrohsdb | d88ca1ca27 | Pedro/vertex cache minimal fix (#3981) | 2025-11-12 10:40:52 -08:00 |
| Marc Kelechava | ab162397cd | Support Google Cloud Workload Identity for vertex models (#3956) | 2025-11-10 15:56:57 -08:00 |
| pedrohsdb | 44528cbd38 | Pedro/fix explicit caching vertex api (#3933) | 2025-11-06 14:47:58 -08:00 |
| pedrohsdb | 0e0ae81693 | Improve LLM error message when LLM is down (#3874) | 2025-10-31 11:41:07 -07:00 |
| pedrohsdb | 46ee020b5d | making gpt5 models have temp 1 (#3849) | 2025-10-29 09:11:08 -07:00 |
| pedrohsdb | 5d7d668252 | point flash and flash lite to stable (#3816) | 2025-10-24 16:45:58 -07:00 |
| greg niemeyer | de5a55bd66 | add claude 4.5 haiku support (#3763) (Co-authored-by: Suchintan <suchintan@users.noreply.github.com>) | 2025-10-20 02:23:22 +00:00 |
| pedrohsdb | bcb3414561 | magnifex qwen3 featherless implementation (#3764) | 2025-10-18 10:44:28 -07:00 |
| Shuchang Zheng | 770ddadc2f | fix drop_params bug (#3756) | 2025-10-17 12:00:34 -07:00 |
| greg niemeyer | 9b2bbda3c8 | add support for claude sonnet 4.5 (#3692) | 2025-10-12 12:57:52 -04:00 |
| greg niemeyer | cb35d966ac | fix claude output tokens (#3695) | 2025-10-12 11:30:46 -04:00 |
| Shuchang Zheng | ea92ca4c51 | support openrouter qwen model (#3630) | 2025-10-06 18:55:52 -07:00 |
| pedrohsdb | bb48db6288 | Updating Gemini flash pointers in registry and frontend to preview-09-2025 (#3584) | 2025-10-01 15:41:14 -07:00 |
| Jonathan Dobson | 2196d46a47 | Revert "Add endpoint for browser sessions history" (#3538) | 2025-09-26 16:14:52 -04:00 |
| Jonathan Dobson | 1f585a184b | Add endpoint for browser sessions history (#3537) | 2025-09-26 16:07:27 -04:00 |
| pedrohsdb | f40a2392c8 | adding new gemini flash preview models (#3536) | 2025-09-26 11:45:22 -07:00 |
| pedrohsdb | dd9d4fb3a9 | Pedro/prompt caching (#3531) | 2025-09-25 15:04:54 -07:00 |
| pedrohsdb | 485b1e025e | Pedro/thinking budget optimization (#3502) | 2025-09-23 13:44:15 -07:00 |
| LawyZheng | 66b2004b70 | Use gemini 2_5 flash lite for create_extract_action (#3429) | 2025-09-13 16:22:57 +08:00 |
| Shuchang Zheng | 0e2aecc75d | llm log (#3414) | 2025-09-11 18:10:05 -07:00 |
| LawyZheng | e0043d002c | refactor gemini reasoning effor (#3292) | 2025-08-25 23:42:42 +08:00 |
| Shuchang Zheng | 0a9b58956f | gemini reasoning effort medium by default (#3282) (Co-authored-by: lawyzheng <lawyzheng1106@gmail.com>) | 2025-08-25 08:06:46 +00:00 |
| Shuchang Zheng | b9470ffb44 | fix cannot access local variable 'prompt_tokens' where it is not associated with a value (#3286) | 2025-08-24 14:25:35 -07:00 |
| Shuchang Zheng | 5055daad00 | GPT-5 Support + Better Logs (#3277) | 2025-08-22 13:02:15 -07:00 |
| Shuchang Zheng | c1b676f85e | upgrade litellm to support gpt5 reasoning (#3218) | 2025-08-17 16:39:37 -07:00 |
| Shuchang Zheng | e356d9fea0 | add support for gpt5 and azure gpt5 series (#3136) | 2025-08-07 15:12:47 -07:00 |
| Shuchang Zheng | ffce05c6ef | Temperature fix for O-models (#3048) | 2025-07-28 14:31:10 -07:00 |
| LawyZheng | 95ab8295ce | laminar integration (#2887) | 2025-07-07 14:43:10 +08:00 |
| Prakash Maheshwaran | d23944bca7 | fixed the openrouter stuff (#2630) | 2025-07-01 14:02:22 -04:00 |
| Shuchang Zheng | eb0e8a21ee | add gemini 2 5 support (#2850) | 2025-07-01 13:38:17 +08:00 |
| Asher Foa | a6bf217559 | Fix typos (#2807) | 2025-06-28 01:26:21 +00:00 |
| Shuchang Zheng | 5f26a02dea | skip llm artifact creation when empty prompt (#2742) | 2025-06-18 14:44:10 +00:00 |
| Wyatt Marshall | 346b36fa4d | ui tars integration fix (#2714) | 2025-06-13 16:52:14 -04:00 |
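Several commits in this history (85fe9d69a5, ae38b9096f) concern keeping the Gemini-only `cached_content` argument from leaking into requests to other providers. A minimal sketch of that kind of provider guard, with hypothetical names chosen for illustration (this is not Skyvern's actual implementation):

```python
def strip_cached_content(model: str, call_kwargs: dict) -> dict:
    """Return a copy of call_kwargs safe to pass to the given model.

    `cached_content` is a Gemini/Vertex-specific parameter; other providers
    reject or mishandle it, so it is dropped for any non-Gemini model.
    Hypothetical helper for illustration only.
    """
    if "gemini" not in model.lower():
        # Copy without the Gemini-only key so the caller's dict is untouched.
        return {k: v for k, v in call_kwargs.items() if k != "cached_content"}
    return dict(call_kwargs)


# Usage: the cache handle survives for a Gemini model, is stripped otherwise.
gemini_kwargs = strip_cached_content(
    "vertex_ai/gemini-3-pro", {"cached_content": "caches/abc", "temperature": 0.0}
)
other_kwargs = strip_cached_content(
    "gpt-5", {"cached_content": "caches/abc", "temperature": 1.0}
)
```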