An audio recording was taken of most of this event. I may use it to refine this piece at a later time.
Venue preamble
“The Most Innovative IEEE Section in the World”
AI in engineering is going to be a complicated subject.
15% discount on online ECE graduate courses at CSU
- Some $2k-odd in savings
- Wow! That’s one month of rent!
Vergent Products
Sponsoring assistance for this event. They are local.
I’m glad a long-lived company like this exists here, but they know that networking is complicated and that their presence must be forcefully pushed out, because people don’t know that they’re here.
As I listen tonight, I will be thinking about how people are bad at marketing by default. We accept that people just show up, and I think we all know that’s not how anything works. We trust the universe and the fruits of our labor. It seems merits aren’t enough.
The Panel
Some of this discussion will be about introductions and backgrounds. My interest is not in the technical vomit of documenting them, but I will try to highlight what they’re doing that is unique and bespoke.
I will not show their job titles, because that information is available at the RE link. In addition, I won’t feel compelled to write about how they agree with me unless it is an under-represented piece of speech.
Greg Nuccio
- AI is the next information revolution.
- The tool and the murder weapon. (It’s a CLUE joke.)
- Leveraging data
- Start from treating AI Output as a Failure Mode
- Data Governance is the New Architecture
- Quality
- Lineage
- Structure
- Foundation Data
(Sid) Siddharth Suryanarayanan
- This is a government programs engineer, with a focus on infrastructure
- He says that they don’t market themselves well but that they are everywhere
- It seems government groups enjoy an ease of funding. Even when the secrecy isn’t really the goal (he’s talking about it in public, after all) it seems government has a marketing hack.
- Data center cooling, solid state transformers
- Defense customers, TRL6, TRL7
- He’s referring to NASA’s Technology Readiness Levels: https://www.nasa.gov/directorates/somd/space-communications-navigation-program/technology-readiness-levels/
- TRL6 is a prototype
- TRL7 is an implementation (and not the top of the scale)
- Never expected us to be recommissioning nuclear power
- He’s finally talking about energy
- Oooh, he’s talking Aldous Huxley: The (Real) Devils of Loudun
- Look into the data centers in northern Virginia
- Stranded investments. I was questioning his relevance to the bigger conversation, but this last bit got interesting. Modular reactors.
- He’s pushing on gallium oxide and gallium nitride. He wants it to arrive: stop talking, start doing.
- He’s talking about modular reactors and I want to hear more.
Later, he sort of lost the plot and Needed Us To Believe his “it’s just the facts” line about how we’re wrong to be wary of nuclear power. I think nuclear power is unavoidable now, but I don’t adopt his (lack of) reasoning.
- https://www.terrapower.com/ (traveling wave)
- https://ground.news/article/us-nuclear-fusion-start-up-backed-by-sam-altman-and-peter-thiel-secures-425mn
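The TRL scale Sid anchored his customers to can be sketched as a simple lookup. This is a minimal sketch; the one-line summaries are my paraphrases of NASA's definitions, not official wording.

```python
# NASA Technology Readiness Levels, paraphrased one-liners (1 = least mature).
TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component validation in a laboratory environment",
    5: "Component validation in a relevant environment",
    6: "System/subsystem prototype demonstrated in a relevant environment",
    7: "System prototype demonstrated in an operational environment",
    8: "Actual system completed and qualified",
    9: "Actual system proven through successful mission operations",
}

def describe(level: int) -> str:
    """Return a short description for a TRL, rejecting values off the 1-9 scale."""
    if level not in TRL:
        raise ValueError(f"TRL must be 1-9, got {level}")
    return f"TRL{level}: {TRL[level]}"

print(describe(6))  # the prototype stage Sid mentioned
print(describe(7))  # operational demo -- still two rungs below the top
```

The point of his TRL6/TRL7 remark lands here: even an operational demonstration leaves two levels before a mission-proven system.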
Andrés Sepulveda Morales
I got to sit next to Andrés at the networking segment, before I realized he was a panelist. Amazing guy.
- Less tangible
- How can humans stay agentic? (sleeper of a question, I love)
- Rocky Mountain AI Interest Group (a 501(c)(3); I didn’t realize)
- Impact Makers and Barrier Breakers
- Serving people who don’t have the budget
- Tech really isn’t magic, but what you’re trying to do is
- Focus on agency itself. 34% of leaders polled say that employees have left their jobs because of how AI is implemented at their company.
- I resonate with this. I left because my company was brick-headed about AI, would only consult washed up ML engineers who didn’t want to be involved. It was a fundamental mockery of talent, merit, and lived experience.
- When you pawn your autonomy to the people who control the models, where are you left? Are you even wondering?
- Where is your data at 10pm at night? (fun callback to 90s)
- Companies are compulsively creating sparkle icons (the icon for starting an AI agent without knowing wtf it can do or why)
- At the prior DEF CON, Salesforce AI agents were compromised live. “Are you really that surprised?”
- There are some interesting articles about how employees break rules to put data into AI even though they know the risks are unknown (or high) and that data breaches are possible. (They barely understand where the risk comes from, and that might placate them.)
- What Are Your Thoughts?
- This is a great sentence that I feel matches my desire to use Instruction Case.
- Keep learning and adapting; Be excited to learn; Honor The Human.
Sudeep Pasricha
- AI has been around for “70 to 80 years”
- I get what he’s saying, but this does remind me of pedestrians who pretend like the future has always been here, just because they see a headline.
- Energy efficiency, fault tolerance, real-time and secure computing
- Academia gets to enjoy having a longer timeline.
- It’s a good reminder.
- Using AI techniques to look at data movement and storage
- New architectures to accelerate architectures.
- Looking at the optical domain, fundamental to most characteristics.
- NSF grant (Fall 2025) to do this sustainably and efficiently.
- It’s a good sign in this era that grants are being given
- Doing some things in the devices, instead of strict CPU/GPU
- I’m not entirely sure what he categorizes as the “device” if not the CPU, but I understand this about the GPU.
- Charging sucks.
- Mobile energy optimization
- Learning how to do usage optimization to manage processes via middleware, mostly on Android as a test bed.
- Edge devices (as a phrase, this is the first time we’ve talked about Edge tonight.)
- Indoor navigation, no GPS
- Co-design for perception in autonomous vehicles.
- This is a “big data” issue for AI, not necessarily as a scale problem of orders of magnitude, but of disparate multi-modal sources.
- IoT-driven weather monitoring
- What, tell me more
- I’m very interested in this because I want to do climate monitoring via unconventional devices
- Department of Defense contracting, top-secret clusters; HP Labs is funding research looking at carbon- and water-efficiency in data centers.
- AI has enabled pathways for student engagement. They get excited.
- Compute is not easily available at schools because it is expensive.
- Content becomes outdated very quickly.
- Syllabus changes every semester
- This bodes. I’ll have more to write later.
- Interdisciplinary integration.
- Too many things to master in a university context, which inflates credit hours, but promotes more problem-solving talk between departments.
- Universities were trained before AI.
- Assessment and Evaluation
- This is about AI use at school
- Doesn’t blame them, but what are they doing at university that they couldn’t be doing on their own? (He phrased it inverted, but it’s the identical question.)
- Ethical and societal implications of AI
- This directly touches everything from semiconductors to IoT; he’s covering the whole gamut in this statement
- AI is starting to displace entry-level jobs, and it is worrying.
- What can faculty do?
- How can faculty and students change the way they are interacting with the course so that they are not redundant?
Q&A
I annotate in-flight Q&A by the panel seating order, left to right as viewed from the audience. These numbers correspond to the names in the order I presented them above.
(skipped a couple boring questions)
What’s the energy use path forward? (1) Research, more efficiency, reduced training costs. Used the tired metaphor of scaling down costs. This is a blind-faith argument: more of the same, the future is the same as the past, etc. (2) Can’t have your cake and eat it too. (Good start.) Moving from a server rack of 72 GPUs that consumes a lot of power (small by scale) to 576 GPUs with even more thermal demand. He’s focusing on chips and their names changing. Chip to the room, room to the building, building to the campus, campus to the grid. Have to focus on the whole strongly linked chain. The isolated cogs are probably a liability here. Renewables are “nice,” but we can’t do it without batteries, and batteries are not optimized for this middle position. Batteries will fail us before our methods do, so we need better solutions. Big giant NUCLEAR. He wants nuclear. Whoa, he’s jumping straight to nuclear disasters. He’s doing damage control to help us understand that we should just do it again. He’s not talking about anything new, just that we should be less scared. He thinks Fukushima is “the worst” nature could throw at us (and I fundamentally reject this). “Those are the facts.” He’s propagandizing that bombs aren’t power. Repeating “these are just the facts.” Losing interest in this person. I don’t disagree with the need for nuclear, but that was not “the facts.” The problem I have is that He Needs Us To Believe that the casualties from Fukushima were Just One. As if we were wrong to be frightened of what happened. (4) We pay a lot for AI. Google is getting into TPU. (He’s not talking about NPU, but he’s in the ballpark.)
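The 72-versus-576 GPU point can be made with back-of-envelope arithmetic. The per-GPU wattage and PUE (cooling overhead multiplier) below are my assumptions, not figures from the panel; the point is the scaling, not the exact numbers.

```python
# Back-of-envelope facility power for the rack sizes mentioned on the panel.
GPU_WATTS = 700   # assumed per-GPU draw (roughly a modern datacenter accelerator)
PUE = 1.3         # assumed power usage effectiveness (cooling/distribution overhead)

def rack_kw(gpu_count: int) -> float:
    """Total facility power for a rack, in kW, including overhead."""
    return gpu_count * GPU_WATTS * PUE / 1000

for gpus in (72, 576):
    print(f"{gpus} GPUs -> {rack_kw(gpus):.1f} kW")
```

Whatever wattage you assume, the 8x jump in GPU count is an 8x jump in thermal demand, which is why the chain he described (chip, room, building, campus, grid) has to be engineered as a whole.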
What can we do? (4) Hallucinations result in wasted time. We are shocked that they are biased but We Are Biased. (He’s trying to talk about regulation but he’s not sure how to get into it from here.) (3) AI can capture good, bad and ugly, and needs to not concern itself with knowing facts as much as representing what’s there.
The following was my question, though it was rendered differently in speech.
Do you think AI will have to invalidate all embeddings and vectorizations when we learn how to use our data better? I heard data described as fossil fuels, but I think it’s more like Uranium.
I think about the risks of building a business that scales with the size of someone else’s dataset. I like exploring the ability to vectorize, but I feel like I’m training before the real event begins. I think we should think smaller.
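The invalidation worry behind my question can be shown with a toy: here, an “embedding model” is just a fixed random projection, a stand-in assumption for a real encoder. Vectors from the same model compare cleanly; vectors from two different model versions live in unrelated spaces, which is why changing models means re-embedding the whole corpus.

```python
import math, random

random.seed(0)
DIM_IN, DIM_OUT = 8, 4

def make_model():
    """A toy 'embedding model': a fixed random linear projection."""
    return [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_OUT)]

def embed(model, text: str):
    """Hash characters into counts, then project -- a stand-in for a real encoder."""
    x = [0.0] * DIM_IN
    for ch in text:
        x[ord(ch) % DIM_IN] += 1.0
    return [sum(w * xi for w, xi in zip(row, x)) for row in model]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

v1, v2 = make_model(), make_model()  # "old" and "new" model versions
doc = "stranded investments"
same = cosine(embed(v1, doc), embed(v1, doc))   # same model: identical vector, cos = 1.0
cross = cosine(embed(v1, doc), embed(v2, doc))  # across models: not meaningfully comparable
print(f"same-model similarity:  {same:.2f}")
print(f"cross-model similarity: {cross:.2f}")
```

The same document gets an unrelated vector under the new model, so every stored embedding, and every index built on top of them, has to be regenerated. That is the recurring cost I mean when I say data is more like uranium than fossil fuel.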
You can reach Autumn Ryan to be heard about this subject at [email protected]. Do not transmit sensitive or private information if it’s unsuitable for others to have.