Companies pitch school leaders constantly. VCs and philanthropic funders are pouring resources into AI experiments. The hype around AI and chat-based interfaces has never been louder.
We've spent the past few months thinking about, experimenting with, building, and even dreaming about how we address real district needs with AI. The result of this work is the SchooLinks Agentic Layer that we recently announced. I want to explain how we got there.
At SchooLinks, context is the foundation of this work. It's what determines whether AI is actually moving the needle in schools.
When we say context is king, we mean it in two ways:
First, the broad sense: AI only becomes useful if it's contextualized to the realities of schools — the everyday logistics, the counselor bandwidth crunch, the workflows that actually move students forward. Without that grounding, even the slickest AI solution risks becoming a distraction instead of a help.
Second, the technical sense: AI models are only as strong as the context windows we give them. That means bringing in the right student and district data at the right time, and tying the model's outputs directly to workflows. Done well, this reduces hallucinations and turns raw text into actionable steps that counselors and students can green-light instantly.
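In code, this second sense of context might look something like the minimal sketch below. Everything here is illustrative: `StudentContext` and its fields are hypothetical stand-ins for real platform data, not an actual SchooLinks schema. The point is simply that structured, relevant data is assembled into the prompt before the model ever sees the task.

```python
from dataclasses import dataclass

@dataclass
class StudentContext:
    # Hypothetical fields; real platform data would be far richer.
    name: str
    grade: int
    declared_interests: list[str]
    completed_courses: list[str]

def build_prompt(student: StudentContext, task: str) -> str:
    """Assemble a grounded prompt: relevant student data first, then the task.

    Grounding the model in structured data up front reduces the chance of
    generic or hallucinated output compared with an empty chat box.
    """
    context_lines = [
        f"Student: {student.name}, grade {student.grade}",
        f"Interests: {', '.join(student.declared_interests)}",
        f"Completed courses: {', '.join(student.completed_courses)}",
    ]
    return "\n".join(context_lines) + f"\n\nTask: {task}"

prompt = build_prompt(
    StudentContext("Jordan", 11, ["nursing"], ["Biology", "Health Science I"]),
    "Suggest two next courses that advance this student's career goal.",
)
```

With a chat interface, a human has to type all of that context by hand, every time; here the system supplies it automatically.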
And to see why this foundation of context matters, it helps to look at where chat-based AI starts — and why the lack of context has been the biggest barrier to schools realizing lasting value.
It's easy to see why chatbots have become the poster child for AI in education. They're intuitive. They look impressive in a demo. They lower the barrier for students to ask questions without navigating complex menus. And they are also very easy to build.
We saw the promise — and we put it to the test. We built and released a chatbot, rolled it out in beta, and gave districts the chance to try it. We built two scenarios: one a facilitated reflection and discussion about a student's career goals, the other about college choice. The results were not what we had hoped.
District leaders appreciated the effort, but the feedback was consistent: it wasn't differentiated or useful enough to solve their real problems. This wasn't about our particular execution — it was about the limitations of the approach itself. Chat, on its own, produces answers that are often too generic. The delivery and exchange of these answers is too disconnected from the real workflows of schools. Students who were already motivated gained some value, but those who needed the most guidance weren't served by the tool and struggled to interact with it productively. This seemed to be a common challenge for other chat-based experiences that districts had tried as well.
The reason is simple: chat interfaces force people to build context manually, one prompt at a time. Humans struggle to consistently provide relevant information, resulting in slow, generic, and sometimes incorrect results. That slowness of context-building is the biggest reason LLMs haven't yet delivered lasting value in schools.
Some students certainly like using chat interfaces and can give them enough context. Conversational interfaces with LLMs create a sense of connection. But there are also risks in relying too heavily on open-ended chat for such a uniquely human endeavor as planning a future. Parents and educators are right to want safeguards and oversight around how students interact with these technologies.
Staff lack the bandwidth to manually create contexts that would make these chat tools useful at scale and across multiple students. The risks of data contamination between students are also high.
Even if a staff user takes the time to feed a chat-based tool the right data and context, there is still what I think of as a "last mile cognitive canyon." Chat-based tools produce text. It's up to humans to clean that output, interpret it, and take action in another system. That's still work: maybe with less thinking, but it isn't giving time back to them. It feels like friction.
We also learned something else: chat doesn't naturally provide the cognitive scaffolding many people need. Text and bullet points can be helpful, but they don't offer the spatial or conceptual structures that help students and staff process and retain information. Visual organization, patterns, and clear workflows are often far more impactful. Simply displaying a conversation log, without organizing it into a system where the user can commit to next steps, remember decisions, or catalog insights, makes it harder to turn reflection into action.
Ultimately, this beta reinforced a broader truth: chat is a good starting point, but it's not the destination. What's missing is the structure and context that turns words into actionable steps to support students' college and career readiness journey.
As we've adopted AI-supported workflows in our organization, we've seen great successes and excitement, and also encountered some trepidation and frustration. The warmest embrace of AI has been, not surprisingly, in applications that reduce and eliminate tedious tasks.
We've had many discussions about AI with school and district staff members in various roles. The counselors, CTE staff, teachers, and others we support in schools need to handle two very different kinds of work: the human conversations that build trust and motivation, and the logistics that surround them.
The best and most effective student-facing staff in schools are deeply motivated and energized by the first kind of work. When logistics consume staff time, they have less time for the conversations that truly matter. Those conversations are the one thing technology shouldn't replace. There is no more efficient way to build trust or unlock motivation than for two humans to sit down together.
A major goal of our work on the Agentic Layer is to rebalance how staff use their time by handling the repetitive, time-consuming tasks.
Our philosophy is simple: let humans focus on humans. Technology should clear away the clutter, not compete with connection. And we know districts have heard these promises before. That's why every agent we're launching is tied to a specific, measurable task — so counselors, teachers, and support staff can feel the difference in their day-to-day, not just hear about it in theory.
So what is the Agentic Layer, really? It's not an abstract idea: it's a set of AI-powered tools that automatically manage well-structured data context as input to LLMs, and integrate the outputs directly into workflows that school staff already embrace inside the SchooLinks platform.
Instead of AI sitting on the side as a novelty, it becomes part of the everyday flow of work: reviewing academic plans, approving course requests, processing transcripts, drafting notes. Agents understand student context, generate insights, and queue up the exact next steps for staff to review and approve.
The only reason we can do this well is because of the foundation we've been building for a decade. For years, we've been the company willing to do the decidedly unsexy work at a level of fidelity higher than anyone in our market space: wiring up course catalogs, automating transcript processing, managing integrations, and cleaning up data flows. Our team and the district partners we've implemented with have collectively forged a culture of doing whatever it takes to get the data flowing and set up. We've seen this pay off for districts that had tried and failed to move course planning off paper with not just one, but two or even three CCR platforms or SISs before us. This has allowed us to bring rigor and structure to schools' most high-leverage planning and goal-setting processes.
We have created a framework that, at scale, allows each Agent workflow to automatically retrieve rich and relevant platform data along with optional user input to pass to LLMs to do valuable things for your staff members. Well-cataloged, predictable data is the only way to deliver on the promise of actually useful AI.
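The shape of such a workflow can be sketched as follows. This is a simplified illustration, not SchooLinks' actual implementation: `fetch_student_record` is a hypothetical stand-in for automated platform data retrieval, and `call_llm` for the model call. The key property is that the workflow ends in a structured, reviewable action rather than loose text.

```python
import json

def fetch_student_record(student_id: str) -> dict:
    # Placeholder for automated retrieval of well-cataloged platform data.
    return {"id": student_id, "goal": "registered nurse",
            "plan": ["Biology", "Health Science I"]}

def call_llm(prompt: str) -> str:
    # Placeholder for the model call; a real system would constrain the
    # model to return structured JSON like this.
    return json.dumps({"action": "suggest_course",
                       "course": "Anatomy & Physiology",
                       "rationale": "Advances the nursing pathway."})

def run_course_suggestion_agent(student_id: str, staff_note: str = "") -> dict:
    """Retrieve context automatically, call the LLM, and return a
    structured next step for staff to review and approve."""
    record = fetch_student_record(student_id)
    prompt = (f"Student goal: {record['goal']}\n"
              f"Current plan: {', '.join(record['plan'])}\n"
              f"Staff note: {staff_note}\n"
              "Propose one next course as JSON.")
    proposal = json.loads(call_llm(prompt))
    proposal["status"] = "pending_staff_approval"  # human stays in the loop
    return proposal
```

Because the data retrieval happens inside the workflow, the staff member's only job is the part that requires judgment: reviewing and approving the proposed step.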
So that is the input. What about the output?
The last step of each of our agentic workflows is essential. As we alluded to earlier, chat-based experiences in education require staff to interpret the AI's text and then take action somewhere else — sending a message in Remind, changing something in a student's course plan, writing and uploading a document. Each of these actions requires a staff member to pull up another program, shift into another mode, and take an action manually. This work demands a lot of focus and executive function — and it's not energizing for staff.
Staff can launch agents directly from the pages where they typically spend the most time writing, reviewing, and working. After an agent runs, it serves up the exact next step a user would take with its output: pre-filling a letter of recommendation, composing a message to share a relevant opportunity with a specially selected group of students, or generating comments and feedback on a SMART goal or resume.
Overall, the process is much more straightforward than using any other AI tools. Staff aren't loading data, prompting, and then copying and pasting info out — ALL of this happens automatically in the SchooLinks platform. They provide additional input or requests, review and adjust outputs, and press go.
We're not just talking about the Agentic Layer — we're launching it. Starting this fall, districts will be able to use a set of eight specialized agents designed to help counselors accomplish the kinds of time-consuming tasks that weigh on their days. These agents aren't theoretical. They're embedded directly in SchooLinks, and they're ready to support staff across three broad areas.
As staff get comfortable with these agents and moderate their suggestions, we'll collect and analyze feedback. Later in the school year, we'll carefully extend some of these agents to students — once we know they're reliable, differentiated, and grounded in real workflows. That way, when students do engage with agents, their staff members will feel comfortable with the output. The result is more relevance for students and more time for staff to focus on deeper conversations.
And we'll always be transparent in how we do all of this. We are committed to transparency in our use of LLMs, statistical models, algorithms, and rule-based approaches, as these strategies collectively play a crucial role in helping schools navigate a complex reality. Education leaders need to understand the sources of insights and input for their staff and students, so they can trust and explain them to families. Used in a more controlled environment than free-flowing chat, LLMs can be both safer and more reliable.
We see this moment as bigger than just a product. We believe this is a generational opportunity — not just for SchooLinks, but for K–12 as a whole — to define how a vertical like post-secondary readiness harnesses AI and keeps humans at the center of the equation.
There's a lot of energy flowing into AI in education right now. That's exciting, but we've also seen that when efforts are vague or exploratory, they rarely become part of a district's integrated technology stack for the long term. The real measure of value is whether solutions make a meaningful difference in day-to-day work and stay relevant over time. Our work will set the standard for impact.
Releasing, building, and iterating on the tools used daily by thousands of schools and millions of students is what gives us energy and has kept us working on this for the past 10 years. It's how we've pulled together one of the best technical teams in ed tech. Building agents that leverage LLMs to have a transformative and lasting impact on how we prepare students for their futures will be the next exciting chapter for us.