Money Focused Podcast
Helping ordinary people become extraordinary with MONEY! Each weekly episode is packed with actionable tips, expert insights, and real-world advice to help you grow financially and take control of your future. Whether you’re just starting out or looking to level up your wealth, you’ve found the perfect place. Get ready to transform your financial journey, one episode at a time!
EP 68 - Why Every Company Should Focus on Leveraging AI Responsibly
Explore the importance of responsible AI with my guest Ron HR Johnson, CEO of Tacilent.ai, who brings extensive experience from the FBI, military, and EY’s global cybersecurity practice. Ron offers a unique perspective on the AI industry's growing investments and highlights the vital role of data privacy in establishing trust. Learn how shortcuts in AI development can pose risks and why many companies fail by simply repackaging large language models without adding real value.
We cover the key elements necessary for ethical and secure AI implementation in corporate environments, including clean data, clear business outcomes, and robust cybersecurity practices. Ron emphasizes the importance of continuous training and a balanced approach, combining AI agents with human oversight to ensure accuracy and ethical practices. Early adopters of AI can gain significant benefits by integrating it thoughtfully and responsibly.
This episode also covers Tacilent.ai’s innovative and cost-effective AI solutions. Ron shares insights from his entrepreneurial journey, highlighting the challenges and triumphs of building a company from the ground up. Tune in for practical advice on leveraging AI responsibly and ensuring its positive impact on business growth and security.
📺 You can watch this episode on Moses The Mentor's YouTube page and don't forget to subscribe: https://youtu.be/IWyUZyVZ9lo
🎯Connect with Ron HR Johnson on LinkedIn and visit his website tacilent.ai
🎯Connect with Moses The Mentor: https://mtr.bio/moses-the-mentor
☕If you value my content consider buying me a coffee: https://www.buymeacoffee.com/mosesthementor
📢Support Money Focused Podcast for as low as $3 a month: https://www.buzzsprout.com/2261865/support
🔔Subscribe to my channel for Real Estate & Personal Finance tips https://www.youtube.com/@mosesthementor?sub_confirmation=1
Welcome back to the Money Focused Podcast. I'm your host, Moses The Mentor, and in this episode I have the pleasure of welcoming Ron Johnson to the show. Ron is a seasoned expert in responsible AI and the CEO of Tacilent.ai, a company dedicated to guiding organizations through the complexities of data-driven decisions. Ron has over 20 years of experience in intelligence, data protection, and risk mitigation. His insights are invaluable for anyone looking to understand the importance of responsible AI, so let's dive in.
Speaker 2:The business right now is what I call a culminating event. Tacilent is a culmination of life events up to this point. I spent about 20 years in government service, as an executive with the FBI and also as a special operations intel officer, and helped build out some very critical and strategic programs around intelligence, risk, and data protection. So for over 20 years I've been doing sensitive data protection and collection to get to a synthesized version of clean data, which is driving how we build the business. After the FBI and the military, I joined EY as an executive.
Speaker 2:There, I led the intelligence and data analytics team, a priority within their global cybersecurity practice. That's when I realized there was an opportunity and a gap in the market, which made me leave EY to start Tacilent, which is really built around responsible AI and some of the principles I learned throughout my career. One is: how do you collect sensitive data, but make it credible so you can actually act on it? Our goal at Tacilent is to provide actionable results from our ability to analyze information, look at your problems strategically, and give you actionable pieces you can take right now and put into practice within your organization, whether you're a small, medium, or large business. As a responsible AI company, we're built on clean data, privacy, and transparency in what we build, plus the understanding that our clients have to deal with everything the 21st century throws at them, which we simplify within our platform.
Speaker 1:Nice, and that's an impressive background. Twenty-plus years in this area is pretty cool, because right now security and data are really hot topics, and the fact that you were in it for twenty-plus years, especially at the FBI, is really great. We need more people like you to protect us all, so thank you. Let's talk about the need for responsible AI in the market right now. I'm an investor, I watch all these investor calls, and everybody's talking about AI and rushing to pour capital investments into this area. As someone who's cautious by nature, when money gets thrown around like that, it makes me feel like shortcuts are being taken. So why is it so important for the market, the economy, the government, everybody as a whole, to really focus on responsible AI and doing it the right way? Talk to us about why that is so key.
Speaker 2:Yeah, it's funny, because I'm on that journey right now from the fundamental investor standpoint; we're in the capital raise phase as the company grows. One of the things popping up in the VC world right now is that AI companies are appearing all over the place. They say they have a product and they're getting a lot of investment, but they're also starting to get exposed, to your point. Investors have noticed that a lot of these companies are still just what's called a wrapper around large language models that already exist. Think of Claude from Anthropic, Gemini from Google, or GPT from OpenAI. What they're doing is basically putting a small layer on top of that and training it to spit out a couple of prompts, and on the back end it's not really doing anything that's super wow, whatever huge dream they sell about all the things it's going to be able to do.
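To make Ron's "wrapper" critique concrete, here is a minimal, hypothetical sketch (not from the episode, and not any real product): a product whose entire value is a canned prompt prepended to the user's question before forwarding it to someone else's hosted model. `call_base_model` is a stand-in for a third-party LLM API such as Claude, Gemini, or GPT, not a real client library.

```python
# Illustrative sketch of a thin "wrapper" product: all the heavy lifting
# happens in someone else's large language model; the "product" is just
# a prompt template on top.

def call_base_model(prompt: str) -> str:
    """Placeholder for a hosted third-party LLM API call."""
    return f"[base model response to: {prompt}]"

def wrapper_product(user_question: str) -> str:
    # The entire proprietary layer: a canned persona prepended to the input.
    template = (
        "You are an expert cybersecurity advisor. "
        "Answer the following question:\n"
    )
    return call_base_model(template + user_question)
```

Everything distinctive here lives in the template string, which is why, as Ron notes, such products are easily exposed and easily replicated.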
Speaker 2:But one of the things people don't really understand is that it requires a lot of effort to get a model to do what you want it to do. So to answer your question about what the opportunity looks like and why there's so much investment: to me, it's that this is a hot new technology, the next big thing, and it's a little different from the internet bubble before it because it can be adopted so quickly. At the same time, one of the things I talk about in my book, which is slightly delayed because I've been so heavy on launching this business (our soft launch is on the 7th of August, a couple of days from now), is that people don't understand all the concerns that come in from a privacy standpoint. When you look at the problems with these large language models, or with implementing AI at all, it all starts with data, and within that data is privacy. These large language models are built on the internet, they're built on everything, and none of that is clean. We know what's on the internet right now is basically everybody doing everything. So people don't realize there are a lot of challenges: for one, the companies that own those large language models are getting sued every day by some company for stealing their data or violating their privacy.
Speaker 2:Everything starts back with the data right. If you want to secure those large language models or create a sense of trust in those, you really need to start with actually focusing on the data that's being ingested by those large language models, and that's actually what we focus on from a tasking standpoint, because of my background in national security. We're doing our own specific data collection. That's around national security level collection methodologies and techniques. So we're out doing data harvesting, collection, going out to interviews and pulling in our own credible data set that we can tell you and give you a level of transparency, which is a responsible AI component within how our fine tuning or our small language model that we're building, how those decisions are being made with some other techniques like retrieval, augmented generation, and so, yeah, there should be some skepticism in the system.
Speaker 2:I completely agree. I'm on the other end of it right now, because when I don't have an opportunity to get into the depths of the company from a technical standpoint, people say, oh, you're just a wrapper, you're just doing an assessment. I'm like, no, bro, we're doing way more than a wrapper around what's already out there. There's a huge proprietary data piece and methodology tied to how the company is being built.
Speaker 1:So essentially, and correct me if I'm wrong, the functionality of what you do is the same, but the data you actually tap into is the differentiator with your company.
Speaker 2:That, and how we validate it. We have an entire process for what's known in the industry as human in the loop; we call it human in the loop extreme, because our process allows us to validate those outputs continuously. Current market accuracy for a large language model is around 60 or 70 percent. Our goal is to get to 80 or 90 percent plus. What's different about us? We're not trying to do everything. With the large language models out there right now, you can ask the model how to shave, how to write a prescription for glasses, how to sew, how to bake a cake. Don't ask our model to do that.
Speaker 2:It's built for specific use cases. Right now, our first go-to-market is around cybersecurity and risk. That's what it's built for, and that's what it's going to do with very high accuracy and very thoughtful responses, and we're always training it. We're training our AI agent; his name is Oli. To give you a use case example: within any AI agent, or even when you're chatting with a copilot, you'll see a degradation in the responses. Ask the same question three times, and in a lot of instances it'll stop responding, or start giving you the same thing over and over, or do what people call the BS.
Speaker 2:It'll start making stuff up, and so for us, we don't want Oli, our platform, to do that, which our platform is called Reset by Tasselin, and so we don't want the platform or Oli to do that. So when Oli gets to a certain point, think of it as our platform is similar to TurboTax for risk, and so you basically go through and get to a certain point, instead of just trying to make a response up. Oli is connecting with someone, what we call our nerd herd. So because our just to be background in the company, where our logo is a buffalo and really like we want to be a tech company that you can feel and touch, and so the nerd herd thing is all tied to that. But we'll have someone within our extended TASLN expert network that you can speak with that can help you get those responses you're looking for for the specific use cases around cyber in a specific industry, whether it's in logistics, manufacturing, healthcare, fintech, you name it. We have experts that understand cybersecurity and risk in those areas. It's some white glove service.
Speaker 1:Yes, exactly, cool. So let me ask you this: is this designed for small, medium, or large businesses? Is it only for businesses, or can a consumer actually tap in?
Speaker 2:Yeah, it's mainly focused on businesses: medium businesses, with something that goes all the way to the enterprise level, and smaller businesses as well. It just depends on the growth stage and what those companies can do, right? The company has an AI agent that's a strategic advisor.
Speaker 2:So you typically will go to our service. You would go to a consulting firm or a group of people to help you with understanding your risk from an assessment, building out a strategy, monitoring, standpoint, technology, economic and I mentioned regulations before. You have to understand all those different things. You typically go to 10 other people for that. We have all that within our platform in one single shop, one single stop shop that is tailored specifically to your organization, based off of our strategy and risk assessment module that you go through at first, which is when you have our first interaction with our AI agent, and that's built on 100 different frameworks and regulations that are known globally.
Speaker 2:And so, to answer your question, white glove service, absolutely, but it's AI first, right? So? But everyone no one understands you first. The reason people have issues with AI is because they think it's a fire for giving. So once you send it out there in the world, you don't have to go train it anymore. That's not the case. In order for you to have a high quality output, you have to always continuously train the AI and provide relevant and new information.
Speaker 1:New data, yeah, and accurate, responsible data. Now that everyone is really trying to adopt AI, and I work for a large company, when I tell you they're trying hard to adopt AI, people are still hesitant. Like you mentioned earlier, the adoption rate is still slow for AI. It might be faster than it was for the internet, but it's still relatively slow.
Speaker 1:When I talk to people about AI personally and professionally in my business, for who I work for, they still kind of think of it as like a, you know, a better version of Google search. You know what I'm saying, so they're not really understanding the full grasp of it yet. So what I would ask you is like so when you speak to your enterprise clients, what do you think is a benefit for them by coming in now and using a service like you provide and ensuring that you know AI is actually done responsibly, getting to the forefront while it's still evolving as a technology overall?
Speaker 2:Right. So I think a couple of things. One is you start the journey right to your point responsibly with us, with Tasselin, we're doing things the right way. So responsible AI is really built on transparency, trust, making sure you have it, and one of the other things is fairness and ethical. Then the other piece is security, and that's really how we built the company, and so our platform, even going back to what we discussed earlier, is around the data. So like having that clean data source but then being able to show that level of transparency of how the decisions are being made within the algorithm and the output, but then also understanding that it does require an investment from a continuous training piece. But then my book.
Speaker 2:I mentioned a few things for that adoption. It really goes back to making sure you understand what is the actual business outcome of why you're moving to that transformation. If this just becomes the hotness and it's the new thing to be able to implement in your environment, that's great right. That sounds cool, like everybody like hey, me up AI, this right, and that's even for us like AI as a part of our, our, our company and task linkai. There's also the website, but then also as we progress forward, that's not going to be a part of the company name, because we're doing more than just AI. I would go to some other deep learning techniques and some other things around, some data science things, but the piece for us and when I have conversations with clients is well, I understand where you're trying to go, like what is it that you want to accomplish? And then having a strategy and a plan towards that. But then also come up with the governance that goes with it. So have that foundation set in place.
Speaker 2:But if you start that transformation and you don't have the data, that's going to cost you some time and an investment in order for anybody to clean up your data to be able to do that. But a lot of people want to talk about the reason I say responsibly is because that includes ethics and security and privacy, and so that security piece if you don't have a good data security or cybersecurity program, that transformation gets stopped in its tracks, right? So you basically have a lot of vulnerabilities within your environment or you don't have a good data governance program, so that foundational pieces for you to do that transformation is going to always be flawed. So it's starting with the foundation now so you can actually have that you can benefit from the adoption later. And so, for us, all those things I mentioned, those are part of how we build the company, but you really want to focus on also what's the value to the individuals that are going to be using the tool, or using like what is it for?
Speaker 2:And then it doesn't be just a chat box.
Speaker 2:Chat box is replacing your customer service agents, but understand, like, what are the pain points for the people actually using that, including internal and then external, like right now, the reason that one of my biggest pain points that I go through is when I'm using other services and they have a chat agent and it's just awful, like it's not able to understand my responses, but then it asks me the same question a hundred times, or it's just like, or if you call in and you never get to the point to where it's like, well, hey, I need to speak to a person right now and so, like, understand, like, hey, what you?
Speaker 2:You want to be able to have a sense, and he's why we have this very hybrid approach. For us, we're like AI agent first, you go through as much as you can there and that's when you get our white glove service. You get to talk to the nerd or whatever it is from there to be able to progress forward. But then, once we go through that, that's a continuous training model for us. But, yeah, people need to understand why. What's the investment required to be able to get there? But then what's the value of it when it gets integrated in the business? And then for us, the longer you use our platform, the smarter it gets. The longer you use our platform, the smarter it gets, and so it's definitely smart to sign up now, before the price catches up with what the capabilities are.
Speaker 1:Right and I mean that's a good business because it's sticky, you know, once it's an organization, if you want to maintain that, you know high level of accuracy and responsible AI, yeah, you know, you get your get tasseling in there. You know it get your get tasseling in there. You know it's good to go so smart. Well, you mentioned ethics a moment ago. So how can companies ensure that they maintain a high level of ethics when they actually implement AI in the workplace? What are some things that they can do?
Speaker 2:Yeah, so for me, one of the other things I um for internally as I'm building the business too, but then also that I mentioned in a couple of their um in the book and then some other things that we're writing from the company is really around understanding the development process. So if you're going to take off the shelf tool, it's still going to require some training and some hands on from your team to get that thing up and running the way you want it to be. But then when you do that, what is it that you can go and like, what's the outcome you like to have? But then how are you going to make sure that you're not offending someone or you're not stealing someone else's data or you're not exposing yourself to a new type of risk? And so when we talk about ethics, I really want you to bet from a standpoint of going from trying to create a great type approach right, so from inception is really around go back to data, but from my developers and that's understanding biases and mindsets from there. And so one of the things I talk about in the book is from my background, from an intelligence executive and officer over the last 20 years. It's like I have to train analysts to understand their own biases and recognize those, and they bring those to the forefront so we can overcome those as a group.
Speaker 2:And so that's one of the areas you need to focus on when you're building out that building or implementing AI to your environment.
Speaker 2:But it's something you can apply across your business other places as well. Right, we say this is the use case for implementing AI into your organization ethically, but then also like, well, what are those challenges out there? Have someone that's aware which our platform can do that for you? Right, like that staff organization that's monitoring what are those use cases? Or what are those examples of how can I overcome those challenges in my environment? But then also, what is it that ethically for our organization and that's taken to your business? Right, like there's some things that are broad stroke with being inclusive and fair from that standpoint of what I mentioned, I don't overcome that from a development standpoint. But then also, you need to understand that throughout your implementation and constantly checking and verifying, so, like you can't what I said before, it's not a fire and forget thing you need to check that output. You need to check the data sources and continuously validate to make sure that that tool is accurate, as it should be.
Speaker 1:You made the comment because, again I mean, it's a business, right? You're not a, you're not a not for profit, right? So there's going to be an investment for a company to maintain this level of responsible AI. So what's, what's your selling point to anybody who's listening and might be interested about? You know what are some of the benefits, some of the financial benefits to companies' bottom line. You know growth to actually implementing AI the right way using Tasselint.
Speaker 2:Yeah, so a few things. One is cost savings, and so I'd say cost savings and tire so like for the Tasseling approach to it cost saving times and then brand and reputation from there. So when you think about it, so if you have a mishap when it comes to what's being spit out from AI, it depends on what the use case is for in your environment. So for us, like, our solution is really built on helping you understand your risk overall in your environment. So whether that is from AI, from cybersecurity, from financial, you name it. But when we talk about this specific use case, one of the things that the benefit for using us is about the time that it takes for you to be able to validate your programs and the quality of everything. Also, the cost we're a fraction of the fraction of the cost. So I can tell you from the time that I was at EUI, it will cost you roughly from $200,000 to $400,000 for the transformation, based on the size of the organization. And in some instances outside of EUI I saw clients charging around $100,000 to help with the tech transformational implementation. There I mean, we can do from a risk standpoint. We're a fraction of a fraction of that cost, based on the size of your organization, but one of the things from a benefit, we're talking about value. So what's the value that you're getting from those companies that you're working with For us when you're getting a level of expertise that you couldn't really get if you were just to go anywhere else in market right? So I'm a Harvard fellow fellow, my cto's uh brown graduate, also fortune 50 executive, my product managers from mit, and then you have this entire list of tassel and nerd herd individuals that have just as much experience and education as well with hands-on right. 
When you go to these traditional firms, a lot of individuals might not have the experience to back what they're trying to coach a Fortune 50, a Fortune 500, or even a medium or small business on. They're regurgitating something they might have gotten off GPT or a PowerPoint deck.
Speaker 2:All of our solutions are highly tailored to the organization, and so when I I answer your question, I think of it from a few ways. One is time saving, so it depends on what type of conversation we're having within your organization. It's a fraction of the time when it comes to how long it takes for us to do the initial piece of the assessment and the strategy build, but then the implementation. I mean, our goal is really to empower you and the organization. So instead of you having to go out and buy all these other tools, buy all these services, we want to give you the resources in hand so you can do as much of it as you can.
Speaker 2:And if you can, then that's what we can come in and help you there. If we can't do it, then we can point you to the direction of some of our strategic partners. And then the last piece of the dialogue. So what are you going to get when you get that level of expertise from us and then, compared to what's in the industry, we're building and bringing a level of expertise that will cost astronomical prices for you to get all of us in a room on an engagement or to help you with the transportation. We're giving that to you in the palm of your hand, from an app on your phone or your iPad or on the computer.
Speaker 1:Nice and even the folks I would imagine that work for those big. You know firms, you know they probably stretch so thin. You know your services, you know white glove, you can reach out, you can touch, touch you guys. So I think the the true consultation approach and personalized GPT bill, I think that's a you know, definitely some firm selling points on your part. So great job with that.
Speaker 2:I appreciate it.
Speaker 1:No, I mean, yeah, I just think it's pretty cool. I mean I've, you know, worked for major corporations. I've been in leadership for over 20 years and when firms like EY and Kinsey these big firms come in, you know you can tell they just go and they fly it across the country. You know they're trying to sell you something. It's not really that personalized touch. You know that probably your company can, can offer. I do have a separate question because this is the Money Focus podcast, so I have a lot of people on that really talk about their journey and entrepreneurship. So talk to us a little bit about that time where you said you know what I want to start my own company. You know I'll be interested to understand that transition.
Speaker 2:Yes, for me, I've always wanted to work in business. Like I said, I worked a lot for the majority of my career in government, and then I was like I wanted to be able to have these other challenges that were out there, but then also, like I've always had this eye for understanding how to apply things that go on in the business world or in private sector into, like, how I led my teams within the government, which allowed me to study a lot there, which allowed me to study a lot there. And even going back to, I wrote a paper back in 2017 that looked at risk and foresight and how to actually identify those risks and be able to do predictive analysis of it, and so I was able. After leaving the FBI and going to UI, I basically went to B school for two and a half years. I learned a lot, ran a book of business about $130 million in sales there and delivered a lot of great work, met a lot of people, created a team that didn't exist from intelligence and analytics team there. But then also, what I noticed is basically what you just noticed, what you just said as well, was really around there's a gap in the market, right, like it's not a very high touch, even though it was very personal. When it comes to, like, some of the issues that companies are dealing with. A lot of the things aren't really tailored right, and that comes from the lights and from Dennis Fresh, operations Intel Officer or in the FBI where, like you, solve your problems yourself, and so my goal was really to figure out how to apply everything.
Speaker 2:I learned from understanding clean data, which was another gap. I saw that a lot of companies just didn't have clean data and they were making decisions off of. But then also the experts that were coming as experts in reality, were they really experts? Not to take anything away from anyone else that's in that industry, but you and I know if you're working in that seat or you're working in the business and you're responsible for those risks that come with those decisions. That's a different type of thing than when it comes in, when I give recommendations. So the goal is to empower organizations to oneers and fiduciaries of the spend that they have, but then also to be more efficient and to identify those risks that they might not be aware of, because now, as a leader, you have to worry about everything. So we call it the 21st century data complexity and overload of everything. All it warrants, and our organization gives you that in our platform and business gives you that clarity way to make those decisions Taylor specifically used.
Speaker 2:So that journey was really around wanting to do something, continue to do something larger than myself and empower people, and so that's what I've been doing for my entire career. It doesn't change with the mission or how we're building the business. We still care about people. Yep, absolutely. We want to make money too, but you can make money and do good too, and that's our internal model there from that standpoint.
Speaker 2:The other piece of it is it's really I would like to work in a corporate and then being an executive in the FBI. There's only so long. I wanted to basically do that for other people. All everything lined up, the blessings lined up, and I was able to be able to have this opportunity to take this leap. And it is a roller coaster every single day, whether you're talking about funding or if you're talking about identifying resources or hiring all of this stuff you have to consider and then getting clients, acquisition, all of that. It's a roller coaster, but I mean it's well.
Speaker 2:I used to tell people at the FBI I work 365 days a year, 24 seven. I do the same now, but it's well. I used to tell people at the FBI we're doing this 65 days a year, 24-7. I do the same now, but it's a different level. It's not the same type of stress, that type of stress that wore me down Every day, wore me out this one. It makes me tired, but it's fulfilling. Too right To see the progressions you have every week. Like I said, it is an emotional rollercoaster every day. Even though you get a small win, the next call might be like something completely different and it takes you to a different spot.
Speaker 1:So, yeah, yeah, I'm in your own business. I currently do both. I still have a corporate tour here and I have rental properties and I have my media businesses. So so I'm I'm kind of all over the place, but the most fulfilling is definitely the things that I've created and crafted and built on my own, even though I do have a great career, you know, but it's not the same. It just isn't. You know so. But I also think it's a mindset for an entrepreneur because, like you said, you know, lining up clients, some go through, some don't. You know, sometimes the funnel need to be adjusted. Yeah, so some people have to, some people are not really ready for that, and it's okay. It's okay. But if you do have the heart for it, you should jump in.
Speaker 2:No, I completely agree. If you have the heart and the mindset for it, and the resilience and perseverance, you definitely should do it.
Speaker 1:What final thoughts or advice would you like to give to the audience regarding responsible AI and the future of it, because we're still on the front end of this AI wave? And then, to close us out, I know you mentioned the launch of your business is days away from when we're recording, but it'll be out for sure by the time I release. So tell us more about your business and how we can stay in touch with you. The floor is yours. Appreciate you.
Speaker 2:So one I would say from a responsible AI standpoint, like we were talking about earlier understand what the goals are that you would like to accomplish from that, but then also don't trust. But then don't trust everything you read and see. So be a good steward of doing your own research, verifying the information that's out there, but then finding good partners. If you're a business, I mean, reach out to us from a tasselingai standpoint, but then also understand that there is going to be a huge piece from a data. So right now we're talking about AI, and then we're talking about general AI and the next wave of what's on the future, of the kind of Sprite, and then you have quantum computing that's on top of that. So a lot of these things are really still built on the goal of the next 21st century, which is data. So if you're able to clean up your data and be able to understand and have it organized the right way and have the right frameworks and security around it, you're creating goal within your organization that you see people selling people's data off all the time right now but that's the biggest thing when we go into the 21st century. And then, from a Tasselinkai standpoint, so we can be found on Tasselinkai and Tasselinkcom, which is T-A-C-I-L-E-N-T. So it's tacit knowledge and resilient to buying. So tacit knowledge is basically the transfer of information that we're trying to do right now into our algorithm to give you that level of expertise and accuracy. And then resilient is really tied to one of my personal models within the organization. So we want to be a resilient organization that withstands the chaos of the 21st century and is partnered with you for the life of your organization. 
And so, yeah, we really want to be able to change the game, and it warmed my heart, I was going to get carried away, when you articulated what we're able to do, from that white glove approach and applying AI to a lot of things, but then our platform being very, very customizable, right? Our goal is to be with you, like I mentioned before, for the life of your business. So, from start to finish, helping you with any type of transformation, all of that can be done within our platform.
Speaker 2:What I was just talking about with my chief information security officer before this call was really around: once you complete our strategy builder or our risk assessment, which is built on a lot of frameworks that people are familiar with, from NIST, HIPAA, and all these other frameworks, you don't have to do another assessment again, because we understand you from a strategic standpoint. We're not what you call a transactional risk tool, where we need to be deep into your environment to look at controls and symptoms. I call all those other tools symptom tools. So we sit, as you identified already, at the strategic level, where we're an alternative to where you traditionally go, these consulting firms. And not to knock any of them, we actually partner with a lot of those firms as well, because we make their lives easier on the front end. We also make you a better user of any of those other tools, because you know your environment the best and we empower you to do so.
Speaker 1:Nice, nice. Well, thank you so much. Like I said, I know you're days away from your launch, so everybody that's listening, make sure to check out his website, and I'll make sure to put that in the episode notes as well. But one more thing before we go: what about social media? Can we tap into you directly, or the company, on social media?
Speaker 2:Yeah, absolutely. I'd love to connect with anyone that has any questions about the business, or just being an entrepreneur, or any of the other crazy stuff I've done over the last 20 years. I'm on LinkedIn, it's Ron HR Johnson, and then the company, Tacilent.ai, has pages on LinkedIn, Facebook, Instagram, and Twitter as well.
Speaker 1:Like I said, I'm going to put it all in the show notes, so everybody, go tap in, reach out to Ron directly with your questions, and also support his business. So appreciate you so much.