AI Strategy · Community · Engineering · Education

Advising, Not Lecturing: What I Heard at CADSCOM 2026

Arun Batchu·April 18, 2026·7 min read

The best way to advise is to listen first.

I drove down to Minnesota State University, Mankato for CADSCOM 2026 — the sixth Colloquium on Analytics, Data Science and Computing — to do two things: grade a round of student research presentations and sit on an industry panel. I ended up with something I did not expect: a sharper read on where the field is heading than I would get from a quarter's worth of trade-press coverage.

The verdict: Students are doing more grounded AI work than the headlines give them credit for. Industry practitioners are quietly converging on a very different future than the one the hype cycle is selling. Showing up — in person, to listen — is still the fastest way to learn where the operating edge really is.

A road trip with good company

The drive down itself was part of the value. I rode to Mankato with two colleagues from the Twin Cities AI community:

  • Justin Grammens — founder of Recursive Awesome and Lab651, president and co-founder of Emerging Technologies North, host of the Conversations on Applied AI podcast, organizer of the Applied AI Conference, and adjunct professor at the University of St. Thomas. Justin has quietly built much of the scaffolding that holds the Minneapolis–St. Paul AI community together.
  • Senthil Kumaran — CIO of MNGI Digestive Health, adjunct professor at Concordia University (St. Paul) and at MNSU, and a member of the CADSCOM AI Advisory Panel. Senthil brings a rare combination of 30+ years of enterprise architecture and a live view of AI deployment inside a healthcare system.
Justin Grammens, Senthil Kumaran, and Arun Batchu at the MNSU Centennial Student Union

Two hours on the road with the two of them was its own kind of briefing. By the time we pulled into the Centennial Student Union parking ramp, we had already compared notes on three of the things I was there to hear more about — agentic workflows in production, the small-language-model shift, and what the next generation of engineers needs to be fluent in.

Meeting the host

CADSCOM is chaired by Dr. Rajeev Bukralia, a full professor in Computer Information Science at MNSU and the founding director of the university's MS in Data Science and MS in Artificial Intelligence programs. I am one of Rajeev's industry advisors on the AI program — which is what brought me to Mankato in the first place.

Arun Batchu, Dr. Rajeev Bukralia, Justin Grammens, and Senthil Kumaran at CADSCOM 2026

Rajeev founded CADSCOM in 2018 and has built it into the flagship event of the Twin Cities ACM Chapter and the MNSU data-science and AI programs. He also co-founded the DREAM student organization (Data Resources for Eager & Analytical Minds) in 2016. The student volunteers who hosted us had organized the day with a level of care that suggested all three of those programs are in good hands.

The opening: AI as a general-purpose transformation

The university's president opened the day by framing AI the way it deserves to be framed: as a general-purpose technology reshaping education and industry, paired with a real obligation around ethics, equity, and responsibility. Awards followed for the organizers who made the event happen — Dr. Ismail Bile Hassan (Metropolitan State; Chapter Chair of Twin Cities ACM), Dr. Mansi Bhavsar, Dr. Lauren Singelmann, Katie Schuman, and Rajeev.

Two things were striking from the opening. First, the organizing work behind an event like this is itself an AI-era skill — the ability to coordinate faculty, students, industry, and sponsors into a single afternoon of exchange. Second, the framing was adult. No breathless AI-will-change-everything. No AI-is-overhyped. Just: this matters, it has obligations, let's do the work.

Student research: grounded, specific, and better than expected

I will admit I walked into the student presentations expecting a mix of polished and rough. What I got was consistently grounded work with clear problem definitions and honest accounting of what did and did not work. A sample:

  • Apple-ripeness detection for smart agriculture. A computer-vision comparison of customized YOLOv11 vs. YOLOv12 models, aimed at automating harvest timing and reducing waste. The presenter was honest about the dataset limitations and what a production deployment would require.
  • Aurelius — emotion-based music recommendation. A framework using MFCC (mel-frequency cepstral coefficient) audio features to map the emotional feel of a piece. The meaningful insight was the deliberate move away from click-behavior signals, which so much of the recommendation industry still leans on.
  • Small language models for translation. A comparative analysis of Llama 2 and Mistral, fine-tuned with QLoRA, on Spanish-to-English translation. The finding — that the models struggled with verbatim lexical accuracy but captured overall semantic meaning well — is exactly the kind of calibrated result the industry needs more of.
  • Customer behavioral segmentation. K-Means clustering and Random Forest on retail invoice data, revealing that when a customer shops is a primary discriminator between loyal high-value shoppers and seasonal deal-seekers. A simple, useful insight that a working analytics team could act on tomorrow.
  • NCAA Division I decathlon analytics. An analysis showing that performance in discus and shot put had the strongest alignment with final overall placement. Specific, testable, quietly interesting.
  • Tessituragrams for vocal repertoire selection. A data-driven framework matching classical art songs to a singer's vocal range and duration capabilities — explicitly designed to help vocalists make objective choices and prevent vocal injury. This was one of the most useful reminders of the day: data science is most powerful when it is in service of a specific human concern.
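The customer-segmentation finding above — that *when* a customer shops separates shopper types — can be illustrated with a toy sketch. To be clear, this is not the student's code: the data, the single hour-of-purchase feature, and the two-cluster setup are all invented here purely to show the shape of the technique, using a minimal 1-D K-Means written in plain Python.

```python
# Toy K-Means sketch (illustrative only — not the student's code or data).
# Clusters synthetic "hour of purchase" values into two shopper groups,
# echoing the idea that shopping *time* can discriminate customer segments.

def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: returns (centroids, labels)."""
    centroids = [min(values), max(values)]  # simple spread-out init for k=2
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: each centroid moves to the mean of its members
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Hypothetical purchase hours: morning regulars vs. late-evening deal hunters
hours = [9, 10, 9, 11, 10, 22, 23, 21, 22, 23]
centroids, labels = kmeans_1d(hours)
# The two centroids land near 9.8 and 22.2 — two clearly separated
# shopping-time profiles a marketing team could treat as distinct segments.
```

A real version would of course use multiple features (recency, frequency, spend) and a library implementation such as scikit-learn's KMeans, but the core assign-then-update loop is exactly this.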

What connected all of them was not sophistication. It was specificity. Each student had a real problem, a defined data set, and an honest account of the gap between the result and the use case. That is the habit of mind that turns into a practitioner.

Key observation: The best student work was not the project with the most advanced model. It was the one with the clearest problem statement. That ordering matters more than any technology trend.

The academic panel: research agendas in the age of generative AI

The research panel, moderated by Rajeev, featured Dr. Deepak Khazanchi (University of Nebraska Omaha) and Dr. Mohammad Alam, Dean of the College of Science, Engineering and Technology at MNSU. Their advice to graduate students was unfashionably grounded: interdisciplinary coursework, persistence, and finding good mentors.

The more interesting part of the discussion was about the ethics and operating reality of generative AI in academic research. Both panelists acknowledged the surge in AI-generated papers and AI-assisted peer reviews. Their position was not to ban the tools — that horse has left the barn — but to insist on transparency and on keeping the researcher as the expert in the loop.

The phrase that stayed with me was "expert in the loop." That is the right framing. Not human in the loop, which too often means a rubber stamp. Expert in the loop means the judgment, the accountability, and the synthesis still sit with a person who understands the domain. AI accelerates the drudgery. The expert still does the thinking.

The industry panel: a quieter, more interesting consensus

After lunch I joined the industry panel alongside practitioners from Thomson Reuters, MNGI Digestive Health, and Recursive Awesome / Lab651. The conversation was unusually aligned for a panel — less because we had coordinated beforehand and more because the operating reality is pushing everyone to similar conclusions.

  • Agentic AI is already changing the unit of work. In software engineering, agents are writing large volumes of code. In healthcare, they are handling clinical transcription and billing code assignment at scale. The unit of human labor is moving from producing output to evaluating and governing output.
  • The skill shift is real, and it favors generalists. The advice to students was blunt: do not marry a specific programming language. Focus on adaptability, full-stack understanding, and an entrepreneurial instinct. The value of knowing how an entire system works — end to end — is going up, not down.
  • Small language models are the quieter revolution. Every panelist surfaced the industry's shift toward SLMs as a first-class strategy, not a fallback. Local execution. Better data security. Lower cost. Meaningfully lower environmental footprint. The hype still orbits frontier models like GPT-4, but the production center of gravity is moving toward smaller, fit-for-purpose models that can be deployed inside the firewall.
  • STEM outreach has to start with application. On the question of how to interest younger students in data science, the panel converged on the same idea: lead with real applications — sports analytics, robotics, music, health — and let the math follow. Starting with abstract mathematics loses the room before the value lands.

The deeper pattern: The industry is not heading toward "bigger AI." It is heading toward distributed, governed, workload-appropriate AI — smaller models running closer to the data, agents doing the bulk work, and experts doing the judgment. The students in the room were closer to this reality than many executives I talk to.

What I took home

Three things struck me as I drove back.

First, the posture of advising matters more than the content. I came to Mankato with a full deck of things I could have said. Most of them would have been less useful than what I heard. The students and the panelists made the better arguments for why showing up, listening, and then adding specific comments is a better contribution than a keynote.

Second, academic and industry perspectives are converging faster than they used to. The concerns — ethics, accountability, the expert in the loop, the shift toward smaller fit-for-purpose models — came up in both rooms. That is a good sign. It means the conversation is maturing.

Third, catalyzing a few students is the highest-leverage thing a senior practitioner can do on any given afternoon. Grading a presentation with specific, usable feedback. Asking a follow-up question that turns a prototype into a research agenda. Pointing a student toward a book or a problem they had not yet considered. These small inputs compound over careers. The network I came in representing only exists because someone did the same for us, somewhere along the way.

The network is not just senior experts serving clients. It is senior experts paying wisdom forward to the next cohort. CADSCOM reminded me that this is not a side activity. It is the practice.

Thank you to Rajeev Bukralia, the DREAM student organization, the Twin Cities ACM Chapter, and Minnesota State University, Mankato for the invitation. Already looking forward to the next one.

Found this useful? Share it.