Daily Tech Digest - May 04, 2024

We Need an Updated Strategy to Secure Identities

Identity needs to be foremost in any security strategy, since ample evidence shows it remains a frequent target. Most breaches today originate with identity, whether through human error, social engineering, or phishing. Solutions providers like Cisco are offering cybersecurity tools that bring together the worlds of identity, networking and security to detect and prevent these identity threats. Solutions like Cisco Duo, Cisco Identity Intelligence, and Cisco Secure Access can minimize exposure. Cisco Duo protects access to applications and data with strong multi-factor authentication, while Cisco Secure Access emphasizes secure remote connectivity to prevent unsanctioned users from gaining access. Additionally, Cisco Identity Intelligence uses AI to analyze user behavior and identity data to proactively clean up vulnerable identities and to detect identity-based security threats. Most organizations use a variety of solutions collected over the years that now reside in the cloud, on premises or in hybrid environments. That’s why a platform approach is so important. It also needs to be easy to deploy and easy for end users to manage.


What is cybersecurity mesh architecture (CSMA)?

Cybersecurity mesh architecture (CSMA) is a set of organizing principles used to create an effective security framework. Using a CSMA approach means designing a security architecture that is composable and scalable, with a common data schema and well-defined, easily extensible interfaces and APIs for interoperability. ... A CSMA proactively blocks attacks through a variety of controls and system design principles. Leveraging advanced machine learning for anomaly detection and employing Secure Access Service Edge (SASE) for dynamic, secure cloud access, CSMA ensures robust encryption standards for data at rest and in transit. Network segmentation and micro-segmentation, paired with continuous authentication and strict authorization, can restrict lateral movement. These components, alongside continuous compliance monitoring and risk management tools, orchestrate a multi-layered defense that preempts cyber threats by adapting dynamically to the evolving security landscape, ensuring continuous protection against potential vulnerabilities and unauthorized access attempts.
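The anomaly detection and continuous authentication the excerpt pairs together can be illustrated with a deliberately simple baseline model. This is a hypothetical sketch (a z-score check, not any product's detection logic): compare a new observation against a user's behavioral history and step up authentication when it drifts too far.

```python
from statistics import mean, stdev

def anomaly_score(history: list[float], observation: float) -> float:
    """Z-score of a new observation against a user's behavioral baseline
    (e.g. typical login hour, bytes transferred, requests per minute)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if observation == mu else float("inf")
    return abs(observation - mu) / sigma

def requires_reauth(history: list[float], observation: float,
                    threshold: float = 3.0) -> bool:
    # Continuous authentication: demand step-up verification when behavior
    # is more than `threshold` standard deviations from the baseline.
    return anomaly_score(history, observation) >= threshold
```

Real deployments use far richer features and models, but the control flow is the same: score continuously, and let the score gate authorization rather than trusting a one-time login.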


Managing Digital Debt: Artificial Intelligence And Human Sustainability

Digital debt represents the time and energy spent managing digital tasks, impeding core job responsibilities. At the same time, while their employees are trying to balance digital communication with creative thinking, organizations are constantly chasing cutting-edge software solutions to stay ahead in a competitive market. In that race, they are piling up their tech balance sheets with the lesser-known but omnipresent “technical debt.” ... Leaders face the daunting task of balancing short-term gains with long-term sustainability, promoting accountability and continuous improvement within their teams. Increasing digital debt hampers organizational agility, raises maintenance costs, heightens the risk of failures and diminishes employee morale, highlighting the imperative for effective leadership in managing debt accumulation. Rather than chasing the newest trends and platforms, leaders should focus on their employees and on the ease of doing business, not only for customers but also for employees.


Enhancing Developer Experience for Creating AI Applications

Kuzniak mentioned that enhancing the developer experience is as crucial as improving user experience. Their goal is to eliminate any obstacles in the implementation process, ensuring a seamless and efficient development flow. They envisioned the ideal developer experience, focusing on simplicity and effectiveness: For the AI implementation, we’ve established key principles: Simplicity: enable implementation with just one line of code. Immediate Accessibility: allow real-time access to prompts without the need for deployment. Security and Quality: integrate security and quality management by design. Cost Efficiency: design cost management and thresholds into the system by default. Kuzniak mentioned that their organizational structures are evolving in the face of a shifting technology landscape. The traditional cross-functional teams comprising product managers, designers, and developers, while still relevant, may not always be the optimal setup for AI projects, as he explained: We should consider alternative organizational models.
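Two of those principles, one-line implementation and cost thresholds by default, can be combined in a small wrapper. The sketch below is entirely hypothetical (the class, its parameters, and the crude word-count token estimate are illustrative, not Kuzniak's actual system or any real SDK): the budget check lives inside the client, so every call site stays a single line.

```python
class BudgetExceeded(RuntimeError):
    pass

class PromptClient:
    """Illustrative AI-call wrapper: one-line usage, cost cap by default."""

    def __init__(self, complete_fn, cost_per_1k_tokens: float = 0.002,
                 budget_usd: float = 1.00):
        self._complete = complete_fn   # the underlying model call
        self._rate = cost_per_1k_tokens
        self._budget = budget_usd
        self.spent = 0.0

    def complete(self, prompt: str) -> str:
        # Crude token estimate: one token per whitespace-separated word.
        est = len(prompt.split()) / 1000 * self._rate
        if self.spent + est > self._budget:
            raise BudgetExceeded(f"call would exceed ${self._budget:.2f} budget")
        self.spent += est
        return self._complete(prompt)
```

At the call site this is just `client.complete("Summarize this ticket")`; the security, quality, and cost policies are enforced by the shared client rather than re-implemented by every team.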


Code faster with generative AI, but beware the risks when you do

"Our experience is that [GenAI-powered] software coding tools aren't as security-aware and [attuned with] security coding practices," he said. For instance, developers who work for organizations in a regulated or data-sensitive environment may have to adhere to additional security practices and controls as part of their software delivery processes. Using a coding assistant can double productivity, but developers need to ask if they can adequately test the code and fulfill the quality requirements along the pipeline, he noted. It's a double-edged sword: Organizations must look at how GenAI can augment their coding practices so the products they develop are more secure, and -- at the same time -- how the AI brings added security risks with new attack vectors and vulnerabilities. Because it delivers significant scale, GenAI amplifies everything an organization does, including the associated risks, Shaw noted. A lot more code can be generated with it, which also means the number of potential risks increases exponentially.


It's the End of the Entrepreneurial Era As We Know It

Today, being an entrepreneur seems to be as easy as twiddling your thumbs and clicking (or swiping) on a few buttons on an app on a smartphone. Hard work? Unlikely! Just click the right settings or prompts and 'Voila!' let the machine do the hard work! Humans were born with the anatomy and physique to be hunters, gatherers, lumberjacks, climbers, and runners. We were blessed to be physically active and agile. Unfortunately, the human race has just been through an entire century of changing those mannerisms into becoming desk-bound, delivery-service complacent hermits. ... Is a person truly an entrepreneur, when all they did was click a button and the rest of it was automated? If they built the hardware, software, and automation themselves, then in my eyes it's clearly entrepreneurial. But if another created the machine and they used it, are they really an entrepreneur? Having produced and directed many TV shows exploring and exposing advanced tech and innovation positively, I am clearly bullish on our technologically supercharged future.


'Architecture by conference' is a really bad idea

The role of a generative AI architect should go beyond merely applying existing technologies; it should involve pioneering new methodologies and pushing the boundaries of what’s possible. As leaders, we must foster a culture that not only encourages innovation but actively rewards it. Are we questioning established norms and continuously seeking opportunities to improve and innovate? Are we blindly following other people’s approaches to completely different business problems? It’s time to stop imitating architectural processes from hyperscaler conferences or reusing frameworks, spreadsheets, and slides developed for another project by whatever consulting firm. You need to get smart quickly and stop copying off other people’s papers. The journey toward exceptional generative AI architecture for use in or out of the cloud is challenging yet crucial. It requires a break from tradition, a commitment to deep customization, and a resolve to innovate. I wish I could tell you this is easy, but we’re about to embark on building core IT systems that will define the business’s value.


The slow burn of data egress fees

Despite the financial and technical barriers, some companies are undertaking “cloud repatriation,” where workloads are moved off the cloud and back on-premises. A UK-based study conducted by Citrix found that, of the 350 companies surveyed, a quarter have moved half or more of their cloud-based workloads back to their own infrastructure, or are considering doing so. Among the list of motivations, Citrix noted unexpected costs (33 percent of respondents), performance issues, security concerns, compatibility issues, and service downtime. Twenty-two percent listed financial concerns as the main motivation for repatriation. Fifty percent of respondents identified data transfer fees as a significant contributing factor to unexpected cloud costs. Omdia’s Hahn has seen more repatriation of late. “In the last year or two years, companies have been a lot more selective,” he says. “There used to be a trend of cloud first, cloud all in, but now it seems more like companies think, ‘okay, we’ve got these workloads, some of them make sense to go to the cloud, some make sense to go on-premise.’”
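Because egress fees are usually tiered per gigabyte, the "slow burn" is easy to model. The rates below are illustrative placeholders, not any provider's published pricing; the point is only that the cost scales with every byte that leaves the cloud, which is exactly what makes repatriation math surprising.

```python
def egress_cost_usd(gb_per_month: float,
                    tiers=((10_240, 0.09),           # first ~10 TB at $0.09/GB
                           (40_960, 0.085),          # next ~40 TB at $0.085/GB
                           (float("inf"), 0.07))) -> float:
    """Estimate monthly egress cost under illustrative tiered per-GB rates."""
    cost, remaining = 0.0, gb_per_month
    for tier_gb, rate in tiers:
        billed = min(remaining, tier_gb)
        cost += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return round(cost, 2)
```

A workload moving 1 TB out per month costs on the order of $90 at these example rates, and the fee recurs every month, which is why respondents single out data transfer as a driver of unexpected cloud costs.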


Coaching your IT team for change: 9 tips

“You need a clearly articulated ‘why,’” says Sharon Mandell, CIO of Juniper Networks. “Then you need to communicate, communicate, communicate. And if you think you’ve communicated enough, communicate some more.” People will still push back, she says. There will always be people who say, “We’ve always done it this way and it works just fine.” ... “Changes are most accepted if they are tied to mission and purpose,” says Jennifer Dulski, CEO and founder of software company Rising Team. “Every company has a vision, a mission, a set of values,” she says. If you tie the change to that, it won’t feel arbitrary or unnecessary. “Start by grounding it in your mission,” she says. “And be clear about how the benefits are tied to the mission.” Paulo Gardini Miguel, director of technology at The CTO Club, agrees. “Begin by painting the big picture,” he says. “Explain the rationale behind the change and demonstrate how it aligns with the organization’s goals. Highlight the benefits of the change for the team, the company, and the customers.” Whenever possible, leaders go deeper than the company’s stated mission, Dulski says.


Clean Data, Trusted Model: Ensure Good Data Hygiene for Your LLMs

“Garbage in, garbage out” has never rung truer than with LLMs. Just because you have vast troves of data to train a model doesn’t mean you should do so. Whatever data you use should have a reasonable and defined purpose. The fact is, some data is just too risky to input into a model. Some can carry significant risks, such as privacy violations or biases. It is crucial to establish a robust data sanitization process to filter out such problematic data points and ensure the integrity and fairness of the model’s predictions. In this era of data-driven decision-making, the quality and suitability of the inputs are just as vital as the sophistication of the models themselves. One method rising in popularity is adversarial testing on models. Just as selecting clean and purposeful data is vital for model training, assessing the model’s performance and robustness is equally crucial in the development and deployment stages. These evaluations help detect potential biases, vulnerabilities or unintended consequences that may arise from the model’s predictions. There’s already a growing market of startups specializing in providing services for precisely this purpose. 



Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development, or beliefs of somebody else." -- Dr. Ken Blanchard
