
Breaking: LLMs now support video context.

What’s up family?

Say hello to visual context for LLMs

There’s a cool new way to use audio-visual material as context for your LLMs and AI agents. You can take a video (hosted anywhere on the internet) and start chatting with it.

Currently, it has some cool use cases:

  • VCs can use it to shortlist applicants based on their pitch.

  • Learners like me can ask questions about their video lectures and tutorials.

  • Analysts can summarise videos and start chatting with them.

  • Creators can train their agents on all of their YouTube videos and create a clone of themselves.

  • You can skip boring tech conferences and just ask, “What BIG tech did you make this time?”

  • And a few unexplored use cases.

Join the waitlist here –> http://DeepTrain.org/waitlist.


Robert Scoble has already shown his support, and yours is much needed too. Please share your opinions on social media and tag me @AkhouriUdit so I can respond.

Link to the full tweet: https://x.com/AkhouriUdit/status/1822727278327140663

A detailed research paper will be out on arXiv soon.

Meet you at the waitlist.

Regards,

Udit Raj
