In this episode of PodMagic, Bruce Kornfeld, CPO at StorMagic, sits down with Ken Miller, Chief Architect at Cubic DTECH Mission Solutions, for a candid conversation on edge computing in the defense sector. From balancing cloud vs. edge workloads, to mission-critical hardware reliability, to the role of AI at the tactical edge — Ken shares how military environments push the boundaries of IT infrastructure. They dig into the realities of operating in harsh conditions, handling massive data from sensors and video, and why simplicity and resilience matter more than ever.

Transcript

Bruce Kornfeld: Welcome to PodMagic, real conversations about solving real IT challenges. My name is Bruce Kornfeld and I'm your host. I'm the Chief Product Officer at StorMagic. We are always exploring new and innovative ways that simple, reliable technology can benefit you and the people that you serve, whether that's retail stores, branch offices, or in this case, for today's guest, military installations that we can't really talk about. My goal is always to bring interesting guests, deliver some value, and have fun along the way.  

Today we have Ken Miller, who is an expert in edge computing within the defense space and currently serves as Chief Architect for Cubic DTECH Mission Solutions. He's been with the company for over 17 years and resides in the Washington DC area. Welcome Ken.

Ken Miller: Thank you very much for having me on the show. Obviously, we want to stay as up to date and upfront with the latest tech and trends as we can, and I think you'll actually find that there's a lot of overlap between the scenarios, solutions, and problems your customers are trying to solve and the ones our users face.

Bruce Kornfeld: There's a lot of overlap in what we do. We focus on commercial, you focus on defense. Some of the language might be different, but at the end of the day, customers or users with compute needs at small sites or remote sites have the same pain points. I'm looking forward to the conversation. We're both technologists in our own space. We should have fun and hopefully an engaging time here.

One of the things that we talk a lot about in our world, and I bet you do too, is this whole notion that over the last 10, 20, maybe more years, it's been cloud first, cloud everything. But edge computing is a term that keeps gaining ground. How do you think about cloud versus edge, and how do your users think about it as well?

Ken Miller: I would say the two technologies are very complementary. You want to be able to use the same kind of capability at both locations. You want to have on-premises solutions available for a variety of reasons, whether that's data security purposes, archival purposes, or fast action that doesn't rely on transmission media, be that fast fiber optics or, in the case of our users, different radio backhauls, whether terrestrial radio or SATCOM. Anytime you have that kind of long transmission path, you are introducing delay, and in the compute tasks that our customers are looking at bringing online, that latency is really important. Having the benefits of the cloud in terms of data availability and security, combined with the benefits of on-prem or edge computing in our users' terminology, is great because they want local copies available for faster decision making, faster analysis, faster processing, and more granular control.

Bruce Kornfeld: In your use cases or when you're talking to end users and organizations that are interested in your technology, are they typically making a decision and asking for your help on, should this be run locally, at the edge, on-prem, or should we use the cloud? Or do you find that they have already made this architectural decision and they are coming to you with, here is what we need, can you build it for us? Which one do you see more? 

Ken Miller: It's really a split between the two. Some architectures are really all about hosting applications at the edge. Take email services for local users, where you might have a disconnected environment but still want to be able to send at the company level or the battalion level. You want to be able to send emails out even if the main services aren't available. That's an example of something that has to remain active. Or chat services have to remain active even if your backhaul isn't online and available.

Bruce Kornfeld: As far as the technology that you use, I suspect that some of the use cases you can't talk about, but maybe you can genericize. What kind of hardware are you talking about? What kind of form factors do you typically deploy for your users?

Ken Miller: We got our start about 20 years ago, bringing together multiple different kinds of communication technologies, mostly centered around voice transmission. If you have secure phones, they have to maintain protocols and very specific timing with each other in order to function. As voice technology got more capable and better able to maintain those connections and guarantee stability, security, and reliability, a lot of these services started getting transferred to digital workloads that run on top of traditional x86-64 architectures. What that means from our point of view is we had to shift from a telephony background to a more general purpose compute and networking background.

When it comes to the kinds of technologies that we want to integrate, it's really very similar to what anybody would need in an office type scenario. There's analog communications in the form of phone calls, there's video calls, there's video surveillance. I'll give you an example, if you've got some cameras monitoring a situation, you want to be able to very quickly use the technology that's available today to analyze that video footage, identify if somebody is maybe where they're not supposed to be, track them between cameras, and give some context to what's being shown in that footage. That's an example of ways that more high-power compute might be brought to bear for our user set. 

Bruce Kornfeld: Do the organizations that you serve have their own custom-built, homegrown software that you generally aren't privy to, where you just have to know what the specs are to run it, or do you see people using commercial off-the-shelf software? Like video surveillance, would they be using the same types of video surveillance software that we see on the commercial side?

Ken Miller: The answer is no, they're not using your generic off the shelf video surveillance programs. Much of what a modern defense organization does these days is all about integration. There are assets when it comes to air, ground, sea, cyber, space. All of these organizations work together to provide a really holistic, common operational picture. What that means is there's a lot of different kinds of data that are coming in from different platforms. You've not only got video footage, but you've got everything from acoustic sensors, ground tremor sensors, even body worn sensors like heartbeat monitors and individual tracking information, and any other kind of sensor that you can think of.  

When it comes to the data aggregation needs that our users have, they're really unique and maximalist as far as what they want to keep track of. They want not only a live feed of what's going on across the entire operational area, but also the context for how that affects the broader picture. That even extends to things you might not think you'd need sensor platforms on, like a vehicle's fuel status. That's very important when it comes to how long an asset can remain operational. That kind of software is a real big tangle of moving parts that our users have to be able to get access to. There are organizations within the space whose mission is to bring all these things together. I've been fortunate enough for the last 17 years to work with DTECH Mission Solutions to provide the hardware set that runs a lot of these capabilities.

When we talk about requirements around sizing, CPUs and storage and RAM and all these other things, a big part of the challenge for the program offices is that they have a limited budget. They have a directive that's constantly changing. They're trying to get the most compute for their money and, at the same time, reduce the logistics burden for current and future commands.

Bruce Kornfeld: The software solutions we provide at StorMagic seem to be similar in terms of the things that are tracked. We see a lot of IoT, we see a lot of device tracking. Video surveillance is a big one. Hospitals where they're tracking robotics, they're tracking patients. A lot of the same kinds of things, but in the defense world you're probably going deeper. There are definitely some similarities, but I would say there's potentially more innovation going on, because the military sometimes seems to lead commercial in developing new methods and new hardware. I don't know if you have a comment on that or not.

Ken Miller: I think one of the really neat things about being involved with our user set is that, like the healthcare platforms you mentioned, they have healthcare needs. They have all kinds of logistical needs that you can think of. They have inventory needs, they have transport needs, and all of these things have to adhere not only to HIPAA compliance but also to other data security standards on top of that. The access control around who can get to what data is very important, and so is the validation that I am who I say I am, and I am authorized to access this. There's all kinds of compliance that goes into maintaining the data security that you need to keep an operation like that running.

Bruce Kornfeld: How far away are we before identification will be at the DNA levels? Are we ever going to get there? Right now, it's facial recognition, eye recognition. What about DNA? Is it going to happen? 

Ken Miller: My gosh, that's an amazing thing to think about. This has actually kind of been an inside joke that we've had for years. When we first came out with our very first body worn compute device, we had an inside joke with our engineers about the 'every Marine a rifleman' saying. We were talking with one of our folks downrange and he said, yeah, soon it's going to be 'every Marine a data center.' It's not that far off when you look at what the plan is around data aggregation and being able to operate alone without losing your capability. We want to be able to pull in all these different telemetries. We want to be able to make sure that everything's going the way we think it's going to go, or if it's not, we want options on how to course correct or alter. That inside joke is 10 years old, but it's pretty quickly becoming the reality.

Bruce Kornfeld: I want to ask one more thing about reliability, because that's something that we pride ourselves on, on the software side. Our customers want reliable hardware, they want redundancy built into hardware. But at the end of the day, a server is going to fail. What we do on the software side is make our software system so reliable that it helps customers survive any kind of hardware failure and keep systems up and running. I suspect that whole reliability world is another order of magnitude in the world that you live in. Can you tell us any stories about the need for reliability and what you do?

Ken Miller: One of the challenges when it comes to reliability in our space is that it's a different kind of reliability from the traditional 24×7 always up. In a lot of cases our users don't need the equipment running 24×7, but it needs to start every time you push the on button. Any kind of variable in the environment is not allowed to negatively impact the operation of the equipment. Take thermal, shock, or vibration, for instance: if you have to flip a compute platform on while you're on an aircraft, and there are capacitors that aren't rated for high altitude operation, the electrolyte can literally bubble out and evaporate from those caps. Now all of a sudden, the thing doesn't want to turn on.

These are the kinds of challenges that our users face, and if you think about it, anywhere a human can operate, this equipment has to be operational as well. Not just in the sense that it has to be brought into that environment; it might sit there for a month, completely cold, and still has to be able to start. That's a challenge that the traditional data center computing environment is not really prepared for. It's a power challenge, it's a materials challenge, and it comes down to, can the user interact with this when they have heavy gloves on. That's a challenge that we've had to face and address.

Bruce Kornfeld: We definitely have not had to deal with that before. In the 17 years or more that you've been doing this, what's the furthest from Washington, DC that some of your tech has been used? I'm wondering, is it on earth or have you done anything in space? 

Ken Miller: I am fortunately not required to send things to space, as much as I would love it. That's a level of support that requires an entirely different engineering set. We are proud to support the New Zealand Defense Force, the Australian Ministry of Defense, the Canadian Ministry of Defense, every branch of the US Armed Forces, and the UK MoD. All of our Five Eyes allies are utilizing our equipment to accomplish their goals. Be that edge computing, be that IP telephony, be that data analysis and aggregation as a first step before uploading to the cloud. We've got a variety of folks with different challenges, but at the end of the day, the objective is still the same.

Bruce Kornfeld: How do you think about data? Let's talk about data for a second. You've mentioned lots of different use cases, body worn cameras and sensors. The influx of data to the systems that you're designing means it must be growing exponentially every year, every two years, whatever it is. How do you handle all of that? Where do you store all of it? Do you have to design systems that store it? Do you somehow leverage the cloud? Talk about data management, data storage. How do you deal with all of this?

Ken Miller: When we got started doing compute platforms, we really just had one, maybe two SATA 1.5 gigabit per second disks assigned to a system. There was not a lot to it most of the time. Your most intensive write or read operations would be just during startup, loading the operating system in and getting your services started. That is absolutely not the case today. We've got systems being fielded now that rely entirely on data coming in over 25 gig fiber lines, aggregated off 10 gig switches, written to RAM and cached there before being disseminated to multiple NVMe drives that use software RAID like ZFS to ensure data reliability and protect against bit rot. A lot of the challenges are the same as you mentioned: how do you maintain these kinds of speeds? We've got users who are clustering equipment together through hyperconvergence or through other software applications, and I would not be surprised if we had some overlap in our customer sets, or if they're using StorMagic there too.
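To make the ingest pipeline Ken describes a little more concrete, here is a minimal Python sketch of the buffer-then-flush pattern: frames arrive over the network, accumulate in RAM, and get written out to NVMe-backed storage in batches. The mount point, batch size, and file naming are hypothetical illustrations, not anything specific to DTECH's systems, and bit-rot protection would come from the underlying ZFS pool rather than from this code.

```python
import collections
import pathlib
import time

# Hypothetical NVMe-backed mount point (e.g. sitting on a ZFS pool for
# checksumming and bit-rot protection).
STORAGE_DIR = pathlib.Path("/mnt/nvme_pool/ingest")
FLUSH_BYTES = 64 * 1024 * 1024  # flush once ~64 MiB is buffered in RAM


class IngestBuffer:
    """Cache incoming frames in RAM, then flush them to disk in large batches."""

    def __init__(self) -> None:
        self.frames = collections.deque()
        self.buffered = 0

    def add(self, frame: bytes) -> None:
        self.frames.append(frame)
        self.buffered += len(frame)
        if self.buffered >= FLUSH_BYTES:
            self.flush()

    def flush(self) -> None:
        if not self.frames:
            return
        STORAGE_DIR.mkdir(parents=True, exist_ok=True)
        out = STORAGE_DIR / f"batch_{time.time_ns()}.bin"
        with out.open("wb") as f:
            while self.frames:
                f.write(self.frames.popleft())
        self.buffered = 0
```

The point of the pattern is simply that RAM absorbs the bursty 25 gig ingest while the NVMe layer sees fewer, larger sequential writes; in a real deployment the storage stack itself would schedule much of this.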

Bruce Kornfeld: We definitely do, we have military deployments. We do have governments that use our products. I suspect that maybe our software and your hardware touch each other somewhere out there that we may not even know about, which is good. Another area that I wanted to touch on is the user, how they use your products, and the concept of simplicity. Because we hear it over and over again, given what's happened over the last decade or so. It wasn't that long ago that before a user sat down to use a piece of software or hardware, they would literally take a training class or read a manual. These days, those expectations are just gone. Now, of course, we create manuals so that customers can get the information they need, but our products need to install and run intuitively. IT teams don't have massive numbers of people. I assume you have something similar. How does that work in the defense world, with how simple your products need to be?

Ken Miller: There are several levels at which I'll answer this question. The first one is the physical level. Equipment needs to be durable enough to survive basically whatever a human can survive. When it comes to how these systems attach to whatever platform they're going in, it has to be intuitive enough that somebody can do it with very little training, maybe a day of interacting with the system before going out on trials or exercises for shakedown tests. We want the components that fasten these in place to be robust enough that they're not going to suffer from fatigue in situations like that. But we also want them to be simple enough that it's very evident and obvious what you need to do to remove some equipment of ours in an emergency situation. Let's say that a vehicle's been compromised because of incoming fire and there's data on the drives of the server that absolutely has to be extracted. That kind of trade-off between protection and quick release is one of the lines that we have to walk.

So that's number one. We want things to be really easy and intuitive to use. Number two is, you're exactly correct when you talk about complexity around deploying new technologies and software. Not everybody who's a network administrator is also going to be a Linux administrator, a VMware administrator, a StorMagic administrator, an AI expert, and the list goes on and on. In the situations that a lot of our users find themselves in, so many of these technologies are working in concert that they get an introduction to all of these components. They know how to navigate Cisco IOS and Juniper Junos. They know how to navigate terminals and command lines and IPMI interfaces and any kind of management engine that you could think of. A big part of the mission now is saying, OK, we have all these components that are making real differences in our end users' quality of data aggregation and analysis. How do we admin it without having a direct connection to somebody back at Fort Meade or somebody back in a corporate support role? There are ongoing efforts to make that happen.

Bruce Kornfeld: It's definitely similar to what we're seeing too. I want to change gears a little bit and talk about something that clearly any podcast has to touch on, which is AI. Maybe AI isn't something that you see a lot of yet, or maybe your users are already using it, but tell me, do you hear a lot about it? Is it a requirement that's coming? Is AI changing your design philosophy or the things that you have to worry about?

Ken Miller: It absolutely is. The really neat part about the research that's been ongoing and some of the models that are available now is that they give you the ability to run some of the large language models, some of the inference engines, some of the analysis tools, on relatively low powered hardware. I'm still talking about 50 to 250 watts, which in the grand scheme of things for a data center or any fixed location doesn't sound like a lot. But if you're three or four people on a side-by-side somewhere, 250 watts is a good portion of your alternator in that situation. What's really neat about the things that Nvidia and AMD and Intel are doing right now is that those workloads can be run in edge environments without needing links to the outside. I like to go back to surveillance because there's a lot of progress being made there. You could have a camera on a vehicle that's pointed downrange and a small Nvidia chip that's just looking at the feed. As soon as it says, OK, there's a person in this frame, it alerts the human so that we can get real eyes on it. That's one example.
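As a rough illustration of the kind of on-device detection loop Ken describes, here is a short Python sketch using OpenCV's built-in HOG person detector; in a fielded system the model would instead be an accelerated network running on the embedded GPU, and the camera source and alerting path here are purely hypothetical.

```python
import cv2  # OpenCV: the HOG person detector stands in for whatever
            # accelerated model would actually run on the embedded GPU.


def watch_feed(source: int = 0) -> None:
    """Watch a camera feed and alert a human only when a person appears."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
            if len(rects) > 0:
                # A fielded system would push this to an operator console;
                # printing keeps the sketch self-contained.
                print(f"Person detected: {len(rects)} candidate(s) in frame")
    finally:
        cap.release()


if __name__ == "__main__":
    watch_feed()
```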

You can also take an example from after action review. That same side-by-side has come back to wherever their forward operating base is, they upload some of the footage, and somebody says, OK, I would like this platform to show me the timestamp ranges of the video where it's nighttime, and the model is clever enough to go through the video and give you timestamps for when the sun came up and when the sun went down. That's just one example. You could frame it around, I want footage from this location, I want footage under these conditions. Those are some ways that having access to both analysis tools and inference engines is really important. The more these are able to run on smaller edge hardware, the faster we're going to see the adoption. It's already happening. We've got several customers who have come to us and said, we have a certain requirement for tokens per second of generation under X model LLM. When we go about designing a certain platform, that's what we target with those requirements.
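For the nighttime-footage query Ken mentions, here is a deliberately simplified Python sketch that stands in for the model-driven version: it samples frames, uses mean brightness as a crude day/night signal, and returns timestamp ranges. The threshold, sampling step, and brightness heuristic are all assumptions for illustration; a real system would lean on a vision model or LLM-driven tooling rather than a raw brightness cutoff.

```python
import cv2


def night_ranges(video_path: str, dark_thresh: float = 60.0, step: int = 30):
    """Return (start_s, end_s) ranges where sampled frames are dark,
    a crude stand-in for 'show me the parts where it's nighttime'."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    ranges, start, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # only inspect every Nth frame to keep it cheap
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            is_dark = gray.mean() < dark_thresh
            t = idx / fps
            if is_dark and start is None:
                start = t                    # nighttime segment begins
            elif not is_dark and start is not None:
                ranges.append((start, t))    # nighttime segment ends
                start = None
        idx += 1
    if start is not None:
        ranges.append((start, idx / fps))
    cap.release()
    return ranges
```

Calling `night_ranges("patrol.mp4")` on a hypothetical clip would return a list like `[(0.0, 312.5), ...]`, which is the kind of answer an operator could then scrub to directly.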

Bruce Kornfeld: We're seeing similar things, where the cloud is getting a lot of attention around AI. I've even read articles that say you must use the cloud for AI, et cetera. That seems to be shifting a little bit, in that maybe the large language model work is being done in the cloud, but when it comes to small sites, the inferencing, the decisions that have to be made, probably can't be cloud dependent due to latency, like you were talking about earlier. What do I need for infrastructure at the small sites to be able to run a piece of my AI strategy, whether that's the inferencing model or fast decision making? In the world of autonomous vehicles, it's certainly happening that the technology in the cars themselves is becoming so powerful that they can make some pretty good decisions. I tried out the Tesla autopilot, didn't love it. It wasn't great, but it's getting there. There are some good decisions being made about when to hit the brakes, when to flash the lights. Things are happening. In the defense world, are they heading towards a world where, on the battlefield, humans are no longer even making decisions? Is it all AI deciding when to drop the bomb, how to drop the bomb, when to shoot a gun? Is it all local AI? Could that happen at some point in time?

Ken Miller: My answer to that would be from a possibility standpoint, we're not that far off. In fact, we might be there already. From a philosophical standpoint, I would hope that a human is involved in a trigger pull. 

Bruce Kornfeld: I wanted to close with one last question for you, open-ended question. Where do you see edge computing going in your space in the next two, three, five years? Answer it however you like. 

Ken Miller: I'm personally of the opinion that edge computing is going to become more important than cloud computing as the world sees more complex conflicts. What I mean by that is the more we have near peer adversary conflicts, superpower against superpower, the more it matters that it's not that difficult for an adversary to deploy data jamming or interception technologies. The trade-off of having more or less capable platforms in the form of an edge compute strategy is that you get flexibility in how you deploy each of those nodes, and a node that has edge compute AI capability is going to be more autonomous than a unit that doesn't. There will always be a majority of that work done in the cloud, but to go with an all-or-nothing strategy, I think, is short sighted. It really benefits both sets of technologies to have the option to go either way.

Bruce Kornfeld: We see the same thing in our world with the cloud. The cloud doesn't go away, but there's so much activity where data is created, whether that's retail where the customers are, or in factories, or IoT. There's so much happening outside of a data center that edge computing technology is just going to continue to grow. Most of the analysts say the same thing: a $200 billion industry growing to $500 billion. We're aligned there in terms of the future.

Ken, thank you so much for joining. It was great meeting you, great having you, and hopefully we'll have you back sometime as well. 

Ken Miller: This has been a really great opportunity, thanks for having me on. I'll just close with this, because I happened to notice it on my bookshelf over here, and it speaks to your last question: General McChrystal, in his book Team of Teams, talked about how his ability to outmaneuver his adversaries was based on giving his individual commanders more autonomy and more trust, and compute at the edge is really all about enabling that kind of critical thinking and critical response. I'll leave you with that.

Bruce Kornfeld: I love that analogy. That's a great analogy. Thank you so much.  

And thanks to those of you who stuck with us. Appreciate the time. Again, I'm Bruce Kornfeld, Chief Product Officer at StorMagic, and this was our PodMagic show, where we're having real conversations about solving real IT challenges.