About Me

<aside> 👨‍💻 Integrations that speak your language @ ToolCharm

</aside>

<aside> 🏫 Computer Science @ UCF

</aside>

<aside> 💼 Prev. Engineering @ Morgan & Morgan, VianAI, Tompkins Robotics

</aside>

<aside> 📩 Email: [email protected] Twitter: @mark_bruckert Github: @mbruckert

</aside>

Startup

<aside> ❇️ ToolCharm

ToolCharm is the startup that I co-founded.

At ToolCharm, we are building foundational AI to solve software integration problems. Anyone who has built software knows that customers and users constantly ask for it to connect with the applications they already use: CRMs, task managers, calendars, email providers, and so on. However, most companies don’t have the engineering time or resources to build integrations for each of these platforms.

We are currently working on the Actions API to solve this issue. Here’s how it works:

One-Click Authentication, in your UI.

We authenticate your users to hundreds of services in a single click, via a single API call, so those services can be used with the ToolCharm ecosystem.


Make the request and you’re done.

Just describe the action you want ToolCharm to take (regardless of the service), provide the input data, and set a desired output format; we'll handle the rest.

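To make the "describe the action, input, and output format" idea concrete, here is a hypothetical sketch of what such a request body might look like. The endpoint, field names, and values are all illustrative assumptions, not the real API; see toolcharm.com for the actual documentation.

```python
# Hypothetical sketch of an Actions API request body. Field names and
# values are made up for illustration; they are not the real schema.
import json

request_body = {
    # Plain-language description of the action, independent of the service
    "action": "Create a task called 'Follow up with Acme' in the user's task manager",
    # Input data the action needs
    "input": {"due_date": "2024-06-01", "priority": "high"},
    # Shape you want the response normalized into
    "output_format": {"task_id": "string", "url": "string"},
}

payload = json.dumps(request_body)  # would be sent with the user's auth token
```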

https://youtu.be/a9nPmlxTzYk

Check out our website:

https://toolcharm.com

</aside>

Projects

<aside> 💡

Knight Hacks VII [Organized hackathon for 700 people, again] Knight Hacks VII was my third time hosting and organizing a hackathon, and it went very well! Sponsors ranging from ServiceNow and IBM to BNY Mellon and Siemens Energy supported the event, allowing almost 600 hackers to work on projects for 36 hours.


We had some really cool projects, from haptic feedback-powered gloves for physical therapy using computer vision to virtual reality guitar teachers. Some stats we’re really proud of: 359 of the attendees were first-time hackers (meaning they had never attended a hackathon before), and 74% of the attendees were from communities under-represented in Computer Science!


</aside>

<aside> 🌎

Diffusion Earth [Project for FIU’s ShellHacks 2024]

3rd Place - Overall

Diffusion Earth allows you to turn any 2D image or text prompt into a 3D environment you can explore, just like Google Earth. Some cool experiences we demoed over the weekend were stepping inside and exploring paintings, immersing yourself in battles in the Colosseum, and so much more!

https://youtu.be/Ueerm62XOIM

To do this, we perform depth estimation on an image, then turn the result into a 3D point cloud using Open3D. Based on your action (turning or moving forwards/backwards), we move the camera in 3D space and render the output into an image. We then create a mask of the now-missing points and use diffusion inpainting to render the finished reconstructed image. To speed up rendering, we also pre-queue generation of the surrounding views whenever a new action is taken.
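The depth-map-to-point-cloud step can be sketched with the standard pinhole back-projection. This is a dependency-free illustration of the idea; the project itself used Open3D's helpers, and the intrinsics here are made-up toy values.

```python
# Sketch of depth-to-point-cloud back-projection with a pinhole camera
# model. Intrinsics (fx, fy, cx, cy) are toy values for illustration;
# the project used Open3D's built-in helpers instead of this manual loop.

def backproject(depth, fx, fy, cx, cy):
    """Turn a dense depth map (H x W nested lists, metres) into 3D points:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # pixel has no valid depth estimate
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny 2x2 depth map, every pixel 1 m away, principal point at (0.5, 0.5).
cloud = backproject([[1.0, 1.0], [1.0, 1.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

After moving the virtual camera, the points that project outside the known region become the mask that diffusion inpainting fills in.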

gallery.jpg

You can check out more information and implementation details here:

Diffusion Earth

</aside>

<aside> 📺 Farro (https://farro.ai)

Farro is a new kind of search engine that generates videos from your search, powered by AI.

What kind of videos can Farro create?

There are also Farro Courses, which take complex topics, along with uploaded documents like training manuals and even syllabuses, and create courses up to an hour long that cover a topic in its entirety, something that can't be done in a traditional five-minute video.

https://youtu.be/kNaV-BS4D2k

You can try it out yourself here:

https://farro.ai

</aside>

<aside> 🪨 GuideStone [Project for Stanford’s TreeHacks 2024]

Introducing GuideStone! We each have our own educational path, and GuideStone understands that. We build out a graph for each and every user that expands automatically as you learn. For example, if you want to learn integrals but haven't yet learned derivatives, our platform will first generate a lesson on derivatives to make sure you have a full grasp of the topic.

MacBook Pro 14_ - 2 (2).png

I mentioned lessons, but you might be wondering: what does a lesson look like on GuideStone? Every lesson comes complete with a video and a quiz to test your understanding. What makes that better than existing education platforms? Generative AI, of course! Every video on our platform is generated from scratch by GPT-4. We'll dive into how a bit later, but the ability to create videos on a topic for a specific user lets us do some really special things, like modifying example animations to be about things you care about and resonate with, such as your hobbies, interests, and passions.

MacBook Pro 14_ - 7.png

GuideStone is also self-improving. How does that work? We combine eye tracking and quizzes to modify your generated videos based on what actually improves your understanding. While you watch a video, GuideStone gathers data on which parts you pay the most attention to, without any required input from you. When you finish the video, we use this eye-tracking data to surface both general lesson content and specific content we think you might have missed during the video. An agentic architecture then analyzes this data and plans a course of action for creating better videos (ones that perform better on attention and quiz score) for you in the future.
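The core of the feedback loop above is deciding which parts of a video to resurface. Here is a toy illustration; the segment names, scores, and the 0.5 attention threshold are all made up for this sketch, not GuideStone's real logic.

```python
# Toy illustration of combining eye-tracking attention scores with quiz
# results to pick video segments to resurface. Segment names, scores,
# and the threshold are invented for this sketch.

def segments_to_review(attention, quiz_correct, threshold=0.5):
    """Surface segments the viewer likely missed: low measured attention,
    or a segment whose quiz question was answered incorrectly."""
    review = []
    for segment, score in attention.items():
        if score < threshold or not quiz_correct.get(segment, True):
            review.append(segment)
    return review

attention = {"definition": 0.9, "example": 0.3, "proof": 0.8}
quiz_correct = {"definition": True, "example": True, "proof": False}
review = segments_to_review(attention, quiz_correct)
```

A planning agent would then use a list like `review` to decide how the next generated video should differ.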

MacBook Pro 14_ - 4 (1).png

MacBook Pro 14_ - 10.png

Here’s a technical overview thread by my teammate of how it all works:

Owen Burns on Twitter / X

</aside>

<aside> 🧠 GPTeach [Project for FIU’s ShellHacks 2023]

1st Place - Microsoft AI Challenge

GPTeach is your complete video-based, generative AI tutor that can generate complex animations, 2D and 3D graphs, formulas, and so much more. The goal? Visually explain any topic to you, paired with TTS narration that explains both the topic and what is going on in the visuals.

https://www.youtube.com/watch?v=75fcH3Jl1yQ

https://www.youtube.com/watch?v=hnRSjK3Gi4g

Additionally, along with generating these videos based on prompts or questions, GPTeach generates a question and a set of answer choices that allows you to test your learning from the video. Because we know that LLMs can hallucinate answers, we cross-reference the chosen answer choice with Wolfram Alpha to ensure it is correct. If the chosen answer is incorrect, GPTeach will generate an explanation of why it is wrong as well as a video visually presenting what is wrong in the user's logic and why the correct answer is the better option.

Screen Shot 2023-09-17 at 12.11 1 (3).png

We built a GPT-4 agent that is in charge of orchestrating a storyboard for the video based on the user's prompt or question. We use a Pydantic-based output parser that forces the LLM to constrain its output to a strict JSON format containing an array of visual and auditory prompts. We then use a parallelized setup to execute Google Cloud TTS and a custom agent that generates Manim code. This custom GPT-4 agent uses a vector database (Chroma), which we created by scraping the Manim documentation, to run a similarity search and pull relevant documentation for generating that section of the video. The agent is then tasked with generating valid Manim code, which we accomplish through a combination of custom output parsers, and uses it to compile the animations and create the video snippets. From there, our code combines the visuals with the audio, time-matching them to ensure succinct and accurate explanations. At the end, we use ffmpeg to combine all of the video sections into one output file, which we present in the web UI.
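The constrained-output idea above can be sketched without any dependencies. The project used Pydantic; this dependency-free version with stdlib dataclasses shows the same principle, and the field names are illustrative, not the real schema.

```python
# Dependency-free sketch of a constrained storyboard parser: reject any
# LLM output that does not match the expected JSON shape. The project
# used Pydantic; field names here are illustrative assumptions.
import json
from dataclasses import dataclass

@dataclass
class Scene:
    visual_prompt: str  # prompt for the Manim code generator
    audio_prompt: str   # prompt for the TTS narration

def parse_storyboard(raw: str) -> list[Scene]:
    """Parse the LLM's raw text; a KeyError/JSONDecodeError here would
    trigger a retry in a real pipeline."""
    data = json.loads(raw)
    return [Scene(s["visual_prompt"], s["audio_prompt"]) for s in data["scenes"]]

raw = ('{"scenes": [{"visual_prompt": "draw a parabola",'
       ' "audio_prompt": "explain x squared"}]}')
scenes = parse_storyboard(raw)
```

Pydantic adds type coercion and rich validation errors on top of this, which is what makes retry loops with an LLM practical.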

</aside>

<aside> 🎮 The Apartment [AI for Game Programming Project]

1st Place (Best in Show) - UCF’s AI for Game Programming

The Apartment is a game created with my team for UCF’s AI for Game Programming course. The Apartment is a puzzle-solving, 2D top-down pixelated game where the main character Zephr Wilds explores his apartment building as he notices he is stuck in a time loop where the apartment blows up at the end of every day. Hidden throughout the building are characters to talk to, puzzles to complete, and hints as to what might be going on. Do you have what it takes to escape the loop and save the apartment?

Technical Accomplishments

<aside> 🕜 Takes anywhere from 45 minutes to 2 hours to complete the full game

</aside>

Play the Game

The Apartment

Watch Demo Video (3x sped up)

https://www.youtube.com/watch?v=QV_3C85TTRg

Game Screenshots


</aside>

<aside> 💡 Knight Hacks VI [Organized hackathon for 800+ people]

I am a lead organizer of my university’s hackathon, Knight Hacks. Knight Hacks VI took place October 6-8 at the University of Central Florida, and our annually held hackathon ranks in the top 10% of hackathons by size. The event required us to raise over $45,000 from sponsors like Microsoft, Morgan & Morgan, Siemens Energy, Lockheed Martin, and more. I also built the website for advertising and registration for the hackathon!

Success Hacks: 700 Students Attend UCF Knight Hacks’ Largest Hackathon | University of Central Florida News

Some really cool projects were created at the 36-hour hackathon, from tactile haptic wearable systems to AI-powered interview prep in VR.

_DSC2919 (3)-min.jpg

_DSC2924-min.jpg

</aside>

<aside> 📄 IdeaSleuth [Project for Pinecone Hackathon 2023]

For the Pinecone hackathon, my teammate and I created IdeaSleuth! Our project takes an idea description, finds and reads relevant patents from across the globe (in any language!), and generates a PDF with the related patents, a detailed analysis of the IP landscape, suggestions to improve your idea, and even a score of how patentable your idea is!

https://www.youtube.com/watch?v=uK-qNa4NVfI

IdeaSleuth takes a description of an idea from the user and uses an LLM-powered agent (built with LangChain) to convert that description into a series of SQL queries, which are used to search the BigQuery database of international patents. Once relevant patents have been found, IdeaSleuth scrapes each patent's Google Patents page to get the PDF and loads all of the PDFs into our Pinecone vector database. From there, we use a GPT-4 agent to run a similarity search on our Pinecone database, answer a pre-set selection of questions, and assign the idea a rating of how patentable it is. Once all of this information has been written, we use the reportlab Python library to generate a stylish PDF that makes it easy to quickly consume the analysis of your idea and the relevant IP. The front end is built with React and hosted on Vercel.
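The similarity search step that Pinecone performs is, at its core, nearest-neighbor lookup over embedding vectors. Here is a toy stdlib-only illustration of that idea; the patent IDs and 3-dimensional vectors are invented (real embeddings have hundreds of dimensions).

```python
# Toy illustration of the vector similarity search Pinecone provides:
# find the stored patent embedding closest (by cosine similarity) to a
# query embedding. Patent IDs and vectors are made up for this sketch.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

patents = {
    "US-111": [0.9, 0.1, 0.0],  # hypothetical embedding vectors
    "US-222": [0.1, 0.9, 0.2],
}

def most_similar(query, index):
    return max(index, key=lambda pid: cosine(query, index[pid]))

best = most_similar([0.8, 0.2, 0.1], patents)
```

A real vector database does the same ranking with approximate nearest-neighbor indexes so it scales to millions of documents.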

</aside>

<aside> 💪 Muscle Memory [Project for USF’S Hackabull Hackathon]

1st Place - Overall

Created an AI-based personal trainer that could interact with SQL databases and browse the internet, both to store information about users and to ground the LLM in real-world data and prevent hallucinations. Built with Ionic and React on the frontend and Python, LangChain, and PostgreSQL on the backend.

</aside>

<aside> ⚖️ Legal Hackathon [Lead Organizer & Workshop Leader]

Co-organized a hackathon with only three weeks of planning time, the first event that Knight Hacks ever planned for a sponsor.

I also led a workshop for dozens of attendees at the event, teaching them how to build an agent, powered by OpenAI's GPT-4 and LangChain, that uses the Spotify API to become your ultimate music assistant. It knows your Spotify history, retrieves information about songs, artists, and albums, can create custom playlists, and more. The audience first learned about LLMs, tools, and agents, and then followed along as I built the Spotify agent, all in under an hour!

1682918661590.jpeg

1682918664207.jpeg

1682975651695.jpeg

1682975643513.jpeg

</aside>

<aside> 📱 Handheld

Handheld was a no-code, cross-platform mobile app builder designed to make building apps as easy as possible. Drag in native elements, style them yourself or use our pre-built styles, preview on mobile devices by scanning a QR code, and more. You could also add dynamic data by connecting a database or API. There was even an experimental feature using the original GPT-3 API (when it was first in beta, around 2021) to generate apps using natural language.

This was my first real project in high school, and unfortunately it never got released, as I got busy starting college.

The website: https://gethandheld.com

Handheld Demo - No Code Mobile App Builder

Home.png

App Page.png

Editor (Design).png

Data.png

</aside>