Daily Tech Digest - July 25, 2024

7 LLM Risks and API Management Strategies to Keep Data Safe

Overloading an LLM with requests can cause degraded service or increased resource costs, two of the worst outcomes for an organization, and that is exactly what is at stake in a model denial of service. This happens when attackers trigger resource-heavy operations on LLMs, for example through higher-than-normal task generation or repeated long inputs. Authentication and authorization can be used to prevent unauthorized users from interacting with the LLM. Rate limiting on the number of tokens per user should also be used to stop users from burning through an organization’s credits, incurring high costs and consuming so much computation that latency suffers. ... Sensitive information disclosure, a top concern for compliance teams, is perhaps one of the most severe vulnerabilities limiting LLM adoption. It occurs when models inadvertently return sensitive information, resulting in unauthorized data access, privacy violations and security breaches. One technique that developers can implement is using specially trained LLM services to identify and either remove or obfuscate sensitive data.
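
To make the token-per-user limit concrete, here is a minimal sketch (not from the article) of a sliding-window limiter keyed on LLM token consumption; the 60-second window and 10,000-token quota are arbitrary assumptions.

    import time
    from collections import defaultdict, deque

    # Hypothetical per-user quota: 10,000 LLM tokens per 60-second window.
    WINDOW_SECONDS = 60
    TOKEN_QUOTA = 10_000

    _usage = defaultdict(deque)  # user_id -> deque of (timestamp, tokens_used)

    def allow_request(user_id: str, tokens_requested: int) -> bool:
        """Return True if the user still has token budget in the current window."""
        now = time.time()
        window = _usage[user_id]
        # Drop records that have aged out of the window.
        while window and now - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        used = sum(tokens for _, tokens in window)
        if used + tokens_requested > TOKEN_QUOTA:
            return False  # reject or queue; prevents runaway resource consumption
        window.append((now, tokens_requested))
        return True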


Michael Dell performed a ‘hard reset’ of his company so it could survive massive industry shifts and thrive again. Here’s how it’s done

A hard reset asks and answers a small set of critical strategy questions. It starts with revisiting your beliefs. Discuss and debate your updated beliefs with the team and build a plan to actively test the ones you disagree on or are most uncertain about. Next ask what it will take to build a defensible competitive advantage going forward: Determine if you still have a competitive advantage (you probably don’t—otherwise you wouldn’t be in a hard reset). Glean what elements you can use to strengthen and build an advantage going forward. Over-index on the assets you can strengthen and discuss what you will buy or build. Make sure you anchor this in your beliefs around where the world is going. ... During a hard reset, develop rolling three-month milestones set towards a six-month definition of success. Limit these milestones to ten or fewer focused tasks. Remember you are executing these milestones while continuing the reset process and related discussions, so be realistic about what you can achieve and avoid including mere operational tactics on the milestone list.


Software testing’s chaotic conundrum: Navigating the Three-Body Problem of speed, quality, and cost

Companies that prioritize speed over quality face a choice: release to market anyway and risk reputational damage and client churn, or push back timelines and go over budget trying to retrofit quality (which isn’t really possible, by the way). ... Quality is the cornerstone of successful digital products. Users expect software to function reliably, deliver on its promises and provide a seamless user experience. Comprehensive testing plays a large role in making sure users are not disappointed. Developers need to look beyond basic functional testing and consider aspects like accessibility, payments, localisation, UX and customer journey testing. However, investing heavily in testing infrastructure, employing skilled QA engineers and rigorously testing every feature before release is expensive and slow. ... Quality engineers are limited by budget constraints, which can affect everything from resource allocation to investments in tooling. However, underfunding quality efforts can have disastrous effects on customer satisfaction, revenues and corporate reputation. To deliver competitive products within a reasonable timeframe, quality managers need to use available budgets as efficiently as possible.


Cloud security threats CISOs need to know about

An effective cloud security incident response plan details preparation, detection and analysis, containment, eradication, recovery and post-incident activities. Preparation involves establishing an incident response team with defined roles, documented policies, necessary tools and a communication plan for stakeholders. Detection and analysis require continuous monitoring, logging, threat intelligence, incident classification and forensic analysis capabilities. Containment strategies and eradication processes are essential to prevent the spread of incidents and eliminate threats, followed by detailed recovery plans to restore normal operations. Post-incident activities include documenting actions, conducting root cause analysis, reviewing lessons learned, and updating policies and procedures. ... Organizations should start by doing a comprehensive risk assessment to identify critical assets and evaluate potential risks, such as natural disasters and cyberattacks. Following the assessment, develop and document DR and BC procedures. Annually review and update the procedures to reflect changes in the IT environment and emerging threats.


Artificial Intelligence Versus the Data Engineer

So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. A number, a chart, a result that we can stand behind and defend—but like all great science, getting there also needs a bit of art. That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. ... What’s exciting for us beleaguered data engineers is that AI is showing great ability to be a very helpful tool for these hard-to-master skills that will ultimately make us better and more productive at our jobs. We have all, no doubt, seen all the great advancements in AI’s ability to take plain text queries and turn them into increasingly complex SQL, thus lightening the load of remembering all the advanced syntax for whichever data platform is in vogue.


CrowdStrike crash showed us how invasive cyber security software is. Is there a better way?

In the wake of this incident it’s worth considering whether the tradeoffs made by current EDR technology are the right ones. Abandoning EDR would be a gift to cyber criminals. But cyber security technology can – and should – be done much better. From a technical standpoint, Microsoft and CrowdStrike should work together to ensure tools like Falcon operate at arm’s length from the core of Microsoft Windows. That would greatly reduce the risk posed by future faulty updates. Some mechanisms already exist that may allow this. Competing technology to CrowdStrike’s Falcon already works this way. To protect user privacy, EDR solutions should adopt privacy-preserving methods for data collection and analysis. Apple has shown how data can be collected at scale from iPhones without invading user privacy. To apply such methods to EDR, though, we’ll likely need new research. More fundamentally, this incident raises questions about why society continues to rely on computer software that is so demonstrably unreliable. 
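
As a rough illustration of the kind of privacy-preserving collection the author points to, the sketch below uses textbook randomized response, the building block behind local differential privacy; it is not Apple's or any EDR vendor's actual mechanism, just an assumed toy example.

    import random

    def randomized_response(true_bit: bool, p_truth: float = 0.75) -> bool:
        """Report the true value with probability p_truth, otherwise a fair coin flip."""
        if random.random() < p_truth:
            return true_bit
        return random.random() < 0.5

    def estimate_rate(reports, p_truth: float = 0.75) -> float:
        """Recover the population rate from noisy reports; individual answers stay deniable."""
        observed = sum(reports) / len(reports)
        # observed = p_truth * true_rate + (1 - p_truth) * 0.5, solved for true_rate:
        return (observed - (1 - p_truth) * 0.5) / p_truth

    # Example: 10,000 endpoints, 2% truly exhibit a suspicious indicator.
    true_bits = [random.random() < 0.02 for _ in range(10_000)]
    reports = [randomized_response(b) for b in true_bits]
    print(round(estimate_rate(reports), 3))  # close to 0.02 in aggregate

The aggregate signal survives while no single endpoint's report can be taken at face value, which is the trade-off such collection schemes aim for.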


6 Pillars Of Entrepreneurial Mastery: Elevating Your Business Through Lifelong Learning

Entrepreneurs with a growth mindset understand that abilities and intelligence can be developed through dedication and hard work. This perspective fosters resilience, helping to navigate setbacks and failures with a constructive attitude. By viewing challenges as opportunities for growth, you can become more adaptable and willing to take calculated risks. Regular self-reflection, seeking feedback and staying open to new ideas are essential practices for cultivating this mindset. ... As an entrepreneur, continuously educate yourself on tax regulations, funding options and financial management best practices. Engaging with online courses, workshops and financial mentors can provide valuable insights and help stay abreast of emerging trends. ... In today's digital age, technology is a major driver of business innovation and efficiency. Entrepreneurs must stay informed about the latest technological advancements relevant to their industry. This encompasses the implementation and utilization of new software, tools, and platforms to streamline operations, enhance productivity, and improve customer experiences.


Software Architecture in an AI World

Programming isn’t software architecture, a discipline that often doesn’t require writing a single line of code. Architecture deals with the human and organizational side of software development: talking to people about the problems they want solved and designing a solution to those problems. That doesn’t sound so hard, until you get into the details—which are often unspoken. Who uses the software and why? How does the proposed software integrate with the customer’s other applications? How does the software integrate with the organization’s business plans? How does it address the markets that the organization serves? Will it run on the customer’s infrastructure, or will it require new infrastructure? On-prem or in the cloud? How often will the new software need to be modified or extended? ... Every new generation of tooling lets us do more than we could before. If AI really delivers the ability to complete projects faster—and that’s still a big if—the one thing that doesn’t mean is that the amount of work will decrease. We’ll be able to take the time saved and do more with it: spend more time understanding the customers’ requirements, doing more simulations and experiments, and maybe even building more complex architectures.


Edge AI: Small Is the New Large

The technologies driving these advancements include AI-enabled chips, NPUs, embedded operating systems, the software stack and pre-trained models. Collectively, they form an SoC, or system on chip. Software, hardware and applications are key to enabling an intelligent device at the edge. The embedded software stack in the chip brings it all together and makes it work. Silicon Valley-based embedUR specializes in creating software stacks for bespoke edge devices, acting as a "software integrator" that collaborates closely with chip manufacturers to build custom solutions. "We have the ability to build managed software, as well as build individual software stacks for small, medium and large devices. You can think of us as a virtual R&D team," Subramaniam said. ... OpenAI released a smaller version of the ChatGPT language model called GPT-4o mini, set to be 60% cheaper than GPT-3.5. But smaller does not mean less powerful in terms of AI processing. Despite their smaller size, SLMs (small language models) possess substantial reasoning and language understanding capabilities. For instance, Phi-2 has 2.7 billion parameters, Phi-3 has 7 billion, and Phi-3 mini has 3.8 billion.


Reflecting on Serverless: Current State, Community Thoughts, and Future Prospects

The great power of serverless is that getting started and becoming productive is much easier. Just think how long it would take a developer who has never seen either Lambda or Kubernetes to deploy a Hello World backend with a public API on both. As you start building more realistic production applications, the complexity increases. You must take care of observability, security, cost optimization, failure handling, etc. With non-serverless, this responsibility usually falls on the operations team. With serverless, it usually falls on developers, where there is considerable confusion. ... Issues like serverless testing, serverless observability, learning to write a proper Lambda handler, dealing with tenant isolation, working with infrastructure as code tools (too many AWS options—SAM, CDK, Chalice, which one to choose and why?), and learning all the best practices overwhelm developers and managers alike. AWS has published articles on most topics, but there are many opinions, too many 'hello world' projects that get deprecated within six months, and not enough advanced use cases.
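
On the "proper Lambda handler" point, a minimal Python sketch might look like the one below, assuming an API Gateway proxy integration; the greeting logic is a stand-in, and the main idea is keeping business logic out of the entry point so it can be unit tested without Lambda.

    import json

    def _greet(name: str) -> dict:
        # Business logic kept separate from the Lambda entry point for testability.
        return {"message": f"Hello, {name}"}

    def handler(event, context):
        # Assumes an API Gateway proxy integration event; adjust for other triggers.
        params = event.get("queryStringParameters") or {}
        result = _greet(params.get("name", "world"))
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(result),
        }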



Quote for the day:

"You are the only one who can use your ability. It is an awesome responsibility." -- Zig Ziglar

Daily Tech Digest - July 24, 2024

AI generated deepfake fraud drives public appetite for biometrics: FIDO Alliance

“People don’t need to be tech-savvy, the tools are easily accessible online. Deepfakes are as easy as self-service, and this accessibility introduces a significant risk to organizations. How can financial institutions protect themselves against, well, themselves?” The answer, he says, is reliable biometric detection capable of running digital video against biometrically captured data to weed out digital replicas. “Protecting against deepfakes includes layering your processes with multiple checks and balances, all designed to make it increasingly complicated for fraudsters to pull off a successful scam.” For user identity and accessibility checks, he says it is essential to offer “seamless biometric identity verification systems that don’t feel intrusive but do offer increased trust.” “Companies need a strict onboarding process that asks for both biometric and physical proof of identity; that way, security systems can immediately verify someone’s identity. This includes the use of liveness detection and deepfake detection – ensuring a real person is at the end of the camera – and ensuring secure and accurate information authentication and encryption.”


The State of DevOps in the Enterprise

Unfortunately, few if any sites have fully automated DevOps solutions that can keep pace with Agile, no-code and low-code application development -- although everyone has a vision of one day achieving improved infrastructure automation for their applications and systems. ... Infrastructure as code is a method that enables IT to pre-define IT infrastructure for certain types of applications that are likely to be created. By predefining and standardizing the underlying infrastructure components for running new applications on Linux, for instance, you can ensure repeatability and predictability of performance of any application deployed on Linux, which will speed deployments. ... If you’re moving to more operational automation and methods like DevOps and IaC that serve as back-ends to applications in Agile, no code and low code, cross-disciplinary teams of end users, application developers, QA, system programmers, database specialists and network specialists must team together in an iterative approach to application development, deployment and maintenance.
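
As one hedged example of what "pre-defining" infrastructure can look like in practice, the AWS CDK sketch below declares a small standardized Linux stack in Python; the names and sizing are assumptions, and any IaC tool (Terraform, SAM, and so on) could play the same role.

    from aws_cdk import App, Stack
    from aws_cdk import aws_ec2 as ec2
    from constructs import Construct

    class StandardLinuxAppStack(Stack):
        """A reusable, pre-approved template for 'new application on Linux' deployments."""

        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            vpc = ec2.Vpc(self, "AppVpc", max_azs=2)  # standardized network layout
            ec2.Instance(
                self,
                "AppServer",
                vpc=vpc,
                instance_type=ec2.InstanceType("t3.small"),  # assumed default sizing
                machine_image=ec2.MachineImage.latest_amazon_linux2(),
            )

    app = App()
    StandardLinuxAppStack(app, "standard-linux-app")
    app.synth()

Because every team deploys from the same template, the resulting environments behave predictably, which is the repeatability argument the article makes.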


A Blueprint for the Future: Automated Workflow Design

Given the multitude of processes organizations manage, the ability to edit existing workflows or start not from scratch but from a best practice template, assisted by generative AI, holds good potential. I believe this represents another significant step toward enterprise autonomy. This is apt as Blueprint nicely fits into Pega’s messaging that is centred on the concept of the autonomous enterprise. ... In the future, we could see Process Intelligence (PI) integrated with templates and generative AI, pushing the automation of the design process even further. PI identifies which workflows need improving and where. By feeding these insights into an intelligent workflow design tool like Blueprint, we could eventually see workflows being automatically updated to resolve the identified issues. Over time, we might even reach a point where a continuous automated process improvement cycle can be established. This cycle would start with PI capturing insights and feeding them into a Blueprint-like tool to generate updated and improved workflows. These would then be fed into an automated test and deployment platform to complete the improvement, overseen by a supervising AI or human. 


Considerations for AI factories

The new way of thinking, that the "rack is the new server," enables data center operators to create a scalable solution by working at the rack level. Within a rack, an entire solution for AI training can be self-contained, with expansion readily available when more performance is needed. A single rack can contain up to eight servers, each with eight interconnected GPUs. Each GPU can then communicate with many other GPUs in the rack, as the switches can be contained in the rack as well. The same communication can be set up between racks to scale beyond a single rack, enabling a single application to use thousands of GPUs. Within an AI factory, different GPUs can be used. Not all applications or their agreed-upon SLAs require the fastest GPUs on the market today. Less powerful GPUs may be entirely adequate for many environments and will typically consume less electricity. In addition, these very dense GPU servers require liquid cooling, which works best when the coolant distribution unit (CDU) is also located within the rack, reducing hose lengths.
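
A quick back-of-the-envelope from the figures above; the target job size is illustrative, not from the article.

    servers_per_rack = 8
    gpus_per_server = 8
    gpus_per_rack = servers_per_rack * gpus_per_server   # 64 GPUs in one self-contained rack

    target_gpus = 4_096                                   # assumed size of a large training job
    racks_needed = -(-target_gpus // gpus_per_rack)       # ceiling division -> 64 racks
    print(f"{gpus_per_rack} GPUs per rack, {racks_needed} racks for a {target_gpus}-GPU job")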


5 Agile Techniques To Help Avoid a CrowdStrike-Like Issue

Agile is exceptionally good at giving teams a safe playpen for looking around a project for issues they may not have focused on initially. It channels people’s interest in areas without losing track of resources. By definition, no one in an organization will spend time considering the possible outcome of things that they have no experience of. However, by pushing on the boundaries of a project, even if based only on hunches or experience, insights arrive. Even if the initial form of a problem cannot be foreseen, the secondary problems can often be. ... The timebox correctly assumes that if a solution requires jumping down a deep rabbit hole, then the solution may not be achievable within the time constraints of the project. This is a good way to understand how no software is an “ultimate solution,” but simply the right way to do things for now, given the resources available. ... Having one member of a team question another member is healthy, but can also create friction. Sometimes the result is just an additional item on a checklist, but sometimes it can trigger a major rethink of the project as a whole.


How to review code effectively: A GitHub staff engineer’s philosophy

Code reviews are impactful because they help exchange knowledge and increase shipping velocity. They are nice, linkable artifacts that peers and managers can use to show how helpful and knowledgeable you are. They can highlight good communication skills, particularly if there’s a complex or controversial change needed. So, making your case well in a code review can not only guide the product’s future and help stave off incidents, it can be good for your career. ... As a reviewer, clarity in communication is key. You’ll want to make clear which of your comments are personal preference and which are blockers for approval. Provide an example of the approach you’re suggesting to elevate your code review and make your meaning even clearer. If you can provide an example from the same repository as the pull request, even better—that further supports your suggestion by encouraging consistent implementations. By contrast, poor code reviews lack clarity. For example, a blanket approval or rejection without any comments can leave the pull request author wondering if the review was thorough. 


Goodbye? Attackers Can Bypass 'Windows Hello' Strong Authentication

Smirnov says his discovery does not indicate that WHfB is insecure. "The insecure part here is not regarding the protocol itself, but rather how the organization forces or does not force strong authentication," he says. "Because what's the point of phishing-resistant authentication if you can just downgrade it to something that is not phishing-resistant?" Smirnov maintains that because of how the WHfB protocol is designed, the entire architecture is phishing resistant. "But since Microsoft, back at the time, had no way to allow organizations to enforce sign-in using this phishing-resistant authentication method, you could always downgrade to a lesser secure authentication method like password and SMS-OTP," Smirnov says. When a user initially registers Windows Hello on their device, the WHfB authentication mechanism creates a private key credential stored in the computer's TPM. The private key is inaccessible to an attacker because it is sandboxed on the TPM, therefore requiring an authentication challenge using a Windows Hello-compatible biometric key or PIN as a sign-in challenge.


Cybersecurity ROI: Top metrics and KPIs

The overall security posture of an organization can be quantified by tracking the number and severity of vulnerabilities before and after implementing security measures. A key indicator is the reduction in remediation activities while maintaining or improving the security posture. This can be measured in terms of work hours or effort saved. Traditional metrics for this measurement include the number of detected incidents, Mean Time to Detect (MTTD), Mean Time to Respond (MTTR), and patch management (average time to deploy fixes). Awareness training and measuring phishing success rates are also crucial. ... Evaluating the cost-effectiveness of risk mitigation strategies is paramount. This includes comparing the costs of various security measures against the potential losses from security incidents, tying that figure back to patch management, and weighing it against the number of vulnerabilities remediated. With modern programs, enterprises are empowered to remediate what matters most from a risk perspective. All in all, remediation cost is a better measure of an organization’s overall security posture than the cost of an incident.
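
The MTTD and MTTR figures mentioned above reduce to simple averages over incident timestamps; a small illustrative calculation with made-up incident records is below.

    from datetime import datetime
    from statistics import mean

    # Illustrative incident records (not real data): when each incident occurred,
    # when it was detected, and when it was resolved.
    incidents = [
        {"occurred": datetime(2024, 7, 1, 9, 0),
         "detected": datetime(2024, 7, 1, 10, 30),
         "resolved": datetime(2024, 7, 1, 14, 0)},
        {"occurred": datetime(2024, 7, 8, 22, 0),
         "detected": datetime(2024, 7, 9, 1, 0),
         "resolved": datetime(2024, 7, 9, 6, 0)},
    ]

    mttd_hours = mean((i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incidents)
    mttr_hours = mean((i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents)
    print(f"MTTD: {mttd_hours:.1f} h, MTTR: {mttr_hours:.1f} h")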


Agentic AI drives enterprises away from public clouds

Decoupled and distributed systems running AI agents require hundreds of lower-powered processors that need to run independently. Cloud computing is typically not a good fit for this. However, it can still serve as one node within these distributed AI agent systems, which run on heterogeneous and complex deployments outside public cloud solutions. The ongoing maturation of agentic AI will further incentivize the move away from the public cloud. Enterprises will increasingly invest in dedicated hardware tailored to specific AI tasks, from intelligent Internet of Things devices to sophisticated on-premises servers. This transition will necessitate robust integration frameworks to ensure seamless interaction between diverse systems, optimizing AI operations across the board. ... Integrating agentic AI marks a significant pivot in enterprise strategy, driving companies away from public cloud solutions. By adopting non-public cloud technologies and investing in adaptable, secure, and cost-efficient infrastructure, enterprises can fully leverage the potential of agentic AI.


Learn About Data Privacy and How to Navigate the Information Security Regulatory Landscape

Regulators have made it clear that they are actively monitoring compliance with new state privacy laws. Even if the scope of exposure is relatively low due to partial exemptions, documenting compliance can be key. While companies are struggling to keep up with the expanding patchwork, regulators are also struggling to find the manpower to investigate the huge scope of companies coming under their jurisdiction. ... With the continual rise in cyber threats and a constantly evolving regulatory landscape for data privacy and information security, staying on top of and complying with such obligations and ensuring robust measures to protect sensitive information remain critical priorities. ... Numerous international data protection laws also impact the timeshare industry, but these are the primary laws affecting American resorts. Additionally, the timeshare industry is subject to other sector-related regulations, such as the Payment Card Industry Data Security Standard (PCI DSS), which sets requirements for securing payment card information for any business that processes credit card transactions.



Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” --Eloise Ristad

Daily Tech Digest - July 23, 2024

Transforming GRC Landscape with Generative AI

Streamlining GRC workflows and integrating various components of the technology stack can significantly enhance efficiency. Apache Airflow is an open-source workflow automation tool that orchestrates complex data pipelines and automates GRC processes, leading to substantial efficiency gains. Apache Camel facilitates integration between different system components, ensuring smooth data flow across the technology stack. Additionally, robotic process automation (RPA) can be implemented using open-source platforms like Robot Framework. These platforms automate repetitive tasks within GRC processes, further enhancing operational efficiency and allowing human resources to focus on more strategic activities. By leveraging these open-source tools and techniques, organizations can build a robust infrastructure to support GenAI and RAG in their GRC processes, achieving enhanced efficiency, accuracy, and strategic insights. ... Traditional approaches are labour-intensive and prone to human error, leading to inefficiencies and increased compliance risks. By contrast, GenAI and RAG can streamline processes, reduce the burden on human resources, and provide timely and accurate information for strategic planning. 
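
To illustrate the Airflow point, a minimal DAG chaining hypothetical GRC steps (evidence collection, policy evaluation, exception handling) might look like the sketch below; the task names and schedule are assumptions, not part of any specific GRC product.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical callables, stand-ins for real GRC steps such as pulling control
    # evidence, checking it against policy, and filing exceptions for review.
    def collect_control_evidence(**context): ...
    def evaluate_against_policy(**context): ...
    def raise_exceptions(**context): ...

    with DAG(
        dag_id="grc_daily_compliance_check",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        collect = PythonOperator(task_id="collect_evidence", python_callable=collect_control_evidence)
        evaluate = PythonOperator(task_id="evaluate_policy", python_callable=evaluate_against_policy)
        escalate = PythonOperator(task_id="raise_exceptions", python_callable=raise_exceptions)

        collect >> evaluate >> escalate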


Two AI Transparency Concerns that Governments Should Align On

AI raises two fundamental transparency concerns that have gained in salience with the spread of generative AI. First, the interaction with AI systems increasingly resembles human interaction. AI is gradually developing the capability of mimicking human output, as evidenced by the flurry of AI-generated content that bears similarities to human-generated content. The “resemblance concern” is thus that humans are left guessing: Is an AI system in use? Second, AI systems are inherently opaque. Humans who interact with AI systems are often in the dark about the factors and processes underlying AI outcomes. The “opacity concern” is thus that humans are left wondering: How does the AI system work? ... Regulatory divergence presents a unique opportunity for governments to learn from each other. Governments can draw from the expertise accumulated by national regulators and other governments that are experimenting to find effective AI rules. For example, governments looking to establish information rights can learn from Brazil’s precise elaboration of information to be disclosed, South Korea’s detailed procedure for requesting information, and the EU’s unique exception mechanisms.


5 IT risks CIOs should be paranoid about

CIOs sitting on mounting technical debt must turn paranoia into action plans that communicate today’s problems and tomorrow’s risks. One approach is to define non-negotiables and seek agreement on them with the board and executive committee, outlining criteria for when upgrading legacy systems must be prioritized above other business objectives. ... CIOs should be drivers of change — which can create stress — while taking proactive and ongoing steps to reduce stress in their organization and across the company. The risks of burnout mount because of higher business expectations of delivering new technology capabilities, leading change management activities, and ensuring systems are operational. CIOs should promote ways to disconnect and reduce stress, such as improving communications, simplifying operations, and setting realistic objectives. ... “When considering the growing number of global third parties organizations need to collaborate with, protecting the perimeter with traditional security methods becomes ineffective the moment the data leaves the enterprise,” says Vishal Gupta, CEO & co-founder of Seclore.


Understanding the difference between competing AI architectures

A common misconception is that AI infrastructure can just be built to the NVIDIA DGX reference architecture. But that is the easy bit and is the minimum viable baseline. How far organizations go beyond that is the differentiator. AI cloud providers are building highly differentiated solutions through the application of management and storage networks that can dramatically accelerate the productivity of AI computing. ... Another important difference to note with regard to AI architecture versus traditional storage models is the absence of a requirement to cache data. Everything is done by direct request. The GPUs talk directly to the disks across the network; they don't go through the CPUs or the TCP/IP stack. The GPUs are directly connected to the network fabric. They bypass most of the network layers and go directly to the storage. It removes network lag. ... Ultimately, organisations should partner with a provider they can rely on: a partner that can offer guidance, engineering and support. Businesses using cloud infrastructure are doing so to concentrate on their own core differentiators.


How Much Data Is Too Much for Organizations to Derive Value?

“If data is in multiple places, that is increasing your cost,” points out Chris Pierson, founder and CEO of cybersecurity company BlackCloak. Enterprises must also consider the cost of maintenance, which could include engineering and program analyst time. Beyond storage and maintenance costs, data also comes with the potential cost of risk. Threat actors constantly look for ways to access and leverage the data safeguarded by enterprises. If they are successful, and many are, enterprises face a cascade of potential costs. ... Once an enterprise is able to wrap its arms around data governance, leaders can start to ask questions about what kind of data can be deleted and when. The simple answer to the question of how much is too much boils down to value versus risk. “Start with the fundamental question: What does the company get from the data? Does it cost more to store and protect that data than the data actually provides to the organization?” says Wall. When it comes to retention, consider why data is being collected and how long it is needed. “If you don't need the data, don't collect it. That should always be the first fundamental rule,” says Pierson.


Empowering Developers in Code Security

When your team is ready to add security earlier in the development process, we suggest introducing 'guardrails' into their workflow. Guardrails, unlike wholly new processes, can slide into place unobtrusively, providing warnings about potential security issues only when they are actionable and true positives. Ideally, you want to minimize friction and enable developers to deliver safer, better code that will pass tests down the line. One tool that is almost universal across development and DevOps teams is Git. With over 97% of developers using Git daily, it is a familiar platform that can be leveraged to enhance security. Built directly into Git is an automation platform called Git Hooks, which can trigger just-in-time scanning at specific stages of the Git workflow, such as right before a commit is made. By catching issues before making a commit and providing direct feedback on how to fix them, developers can address security concerns with minimal disruption. This approach is much less expensive and time-consuming than addressing issues later in the development process. This can actually increase the time spent on new code by reducing the amount of maintenance that eventually needs to be done.
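
As a sketch of the Git Hooks idea, the script below could live at .git/hooks/pre-commit and block a commit when staged files match a couple of obvious secret patterns; the patterns and behavior are illustrative, and a real setup would more likely wire in a dedicated scanner.

    #!/usr/bin/env python3
    # Illustrative pre-commit hook: scan staged files for secret-like patterns
    # and abort the commit if any match.
    import re
    import subprocess
    import sys

    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID format
        re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # PEM private key header
    ]

    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    findings = []
    for path in staged:
        try:
            text = open(path, "r", errors="ignore").read()
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern}")

    if findings:
        print("Commit blocked, possible secrets found:\n" + "\n".join(findings))
        sys.exit(1)  # a non-zero exit status aborts the commit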


Retrieval-augmented generation refined and reinforced

RAG strengthens the application of generative AI across business segments and use cases throughout the enterprise, for example code generation, customer service, product documentation, engineering support, and internal knowledge management. ... The journey to industrializing RAG solutions presents several significant challenges along the RAG pipeline. These need to be tackled before RAG solutions can be effectively deployed in real-world scenarios. Basically, a RAG pipeline consists of four standard stages — pre-retrieval, retrieval, augmentation and generation, and evaluation. Each of these stages presents certain challenges that require specific design decisions, components, and configurations. At the outset, determining the optimal chunking size and strategy proves to be a nontrivial task, particularly when faced with the cold-start problem, where no initial evaluation data set is available to guide these decisions. A foundational requirement for RAG to function effectively is the quality of document embeddings. Guaranteeing the robustness of these embeddings from inception is critical, yet it poses a substantial obstacle, as does the detection and mitigation of noise and inconsistencies within the source documents.
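
For orientation, here is a deliberately minimal sketch of the first three stages (pre-retrieval chunking, retrieval, augmentation); embed() stands in for whatever embedding model is used, and the chunk size and overlap are exactly the kind of arbitrary starting values the cold-start problem forces on you.

    from typing import Callable, List

    def chunk(text: str, size: int = 500, overlap: int = 50) -> List[str]:
        # Pre-retrieval: split source documents into overlapping chunks.
        step = size - overlap
        return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

    def cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0

    def retrieve(query: str, chunks: List[str], embed: Callable[[str], List[float]], k: int = 3) -> List[str]:
        # Retrieval: rank chunks by embedding similarity to the query.
        qv = embed(query)
        ranked = sorted(chunks, key=lambda c: cosine(embed(c), qv), reverse=True)
        return ranked[:k]

    def build_prompt(query: str, context_chunks: List[str]) -> str:
        # Augmentation: prepend retrieved context before handing off to the generator.
        context = "\n---\n".join(context_chunks)
        return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

Evaluation, the fourth stage, is deliberately omitted here; it is the stage most dependent on having a labeled data set, which is precisely what is missing at cold start.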


Confidential AI: Enabling secure processing of sensitive data

Confidential AI is the application of confidential computing technology to AI use cases. It is designed to help protect the security and privacy of the AI model and associated data. Confidential AI utilizes confidential computing principles and technologies to help protect data used to train LLMs, the output generated by these models and the proprietary models themselves while in use. Through rigorous isolation, encryption and attestation, confidential AI prevents malicious actors from accessing and exposing data, both inside and outside the chain of execution. ... Confidential AI can also enable new or better services across a range of use cases, even those that require activation of sensitive or regulated data that may give developers pause because of the risk of a breach or compliance violation. This could be personally identifiable user information (PII), business proprietary data, confidential third-party data or a multi-company collaborative analysis. This enables organizations to more confidently put sensitive data to work, as well as strengthen protection of their AI models from tampering or theft.


Women in IT Security Lack Opportunities, Not Talent

Female leaders are also instrumental in advocating for policies and practices that promote diversity and inclusion, such as equitable hiring practices, sponsorship programs, and family-friendly policies. "By actively working to create a more inclusive environment, female cyber leaders can help pave the way for future generations of women in cybersecurity," Dohm said. ... Guenther noted that women often encounter unconscious biases that affect decisions regarding leadership potential and technical capabilities, particularly as it relates to perception bias. "Women in cybersecurity, as in many other fields, often face double standards in how their actions and words are perceived compared to their male counterparts," she said. For example, assertiveness, decisiveness, and direct communication – qualities praised in male leaders – can be unfairly labeled as aggressive or overly emotional when exhibited by women. This disparity in perception can hinder women from being seen as potential leaders or being evaluated fairly. "Addressing these biases is crucial for creating a truly equitable workplace where everyone is judged by the same standards and behaviors are interpreted consistently, regardless of gender," Guenther said.


Early IT takeaways from the CrowdStrike outage

Recovering from the CrowdStrike outage has been an all-hands-on-deck event. In some instances, companies have needed humans to be able to touch and reboot impacted machines in order to recover — an arduous process, especially at scale. If you have outsourced IT operations to managed service providers, consider that those MSPs may not have enough staff on hand to mitigate your issues along with those of their other clients, especially when a singular event has widespread fallout. ... Ensure you review recovery steps and processes on a regular basis to guarantee that your team knows exactly where those recovery keys are and what processes are necessary to obtain them. While BitLocker is often mandated for compliance reasons, it also adds a layer of complications you may not be prepared for. ... The underlying culprit, a faulty CrowdStrike update, was also quickly identified. In other incident situations, you may not be so quickly informed. It may not be clear what has happened and what assets have been impacted. Often, you’ll need to reach out to staff who are closely working with impacted assets to determine what is going on and what actions to take.



Quote for the day:

"Effective questioning brings insight, which fuels curiosity, which cultivates wisdom." -- Chip Bell

Daily Tech Digest - July 22, 2024

AI regulation in peril: Navigating uncertain times

Existing laws are often vague in many fields, including those related to the environment and technology, leaving interpretation and regulation to the agencies. This vagueness in legislation is often intentional, for both political and practical reasons. Now, however, any regulatory decision by a federal agency based on those laws can be more easily challenged in court, and federal judges have more power to decide what a law means. This shift could have significant consequences for AI regulation. Proponents argue that it ensures a more consistent interpretation of laws, free from potential agency overreach. However, the danger of this ruling is that in a fast-moving field like AI, agencies often have more expertise than the courts. ... The judicial branch has no such existing expertise. Nevertheless, the majority opinion said that “…agencies have no special competence in resolving statutory ambiguities. Courts do.” ... Going forward, then, when passing a new law affecting the development or use of AI, if Congress wished for federal agencies to lead on regulation, they would need to state this explicitly within the legislation. Otherwise, that authority would reside with the federal courts. 


Fostering Digital Trust in India's Digital Transformation journey

In this era where digital interactions dominate, trust is the anchor for building resilient organizations and stronger relationships with stakeholders and customers. As per ISACA’s State of Digital Trust 2023 research, 90 percent of respondents in India say digital trust is important and 89 percent believe its importance will increase in the next five years. Nowhere is this truer than in India, the world’s largest digitally connected democracy and a burgeoning hub of digital innovation and transformation. ... A key hurdle in building and maintaining digital trust in most countries is the absence of a standardized conceptual framework for measurement, along with uneven access to reliable internet infrastructure and digital literacy. In India’s case, a rapidly expanding digital footprint brings an equally high threat of issues such as lack of funding, unavailability of technological resources, shortage of skilled workforce, lack of alignment between digital trust and enterprise goals, inadequate governance mechanisms, and the spread of misinformation through social media, all of which can lead to financial fraud and data theft.


Tech debt: the hidden cost of innovation

While tech debt may seem like an unavoidable cost for any business heavily investing in innovation, delving deeper into its causes can reveal issues that may derail operations entirely. Many organisations struggle to find a solution, as the time required for risk analysis can seem unfeasible. Yet, by recognising early signs, businesses can leverage the right tools and find the right partners to facilitate a low-risk and controlled modernisation of legacy systems. Any IT modernisation program requires a strategic, evidence-based approach, starting with a rigorous fact-finding process to identify opportunities and inefficiencies within legacy systems. ... Making a case for modernisation requires articulating the expected benefits, costs and challenges beforehand. This begins with a comprehensive analysis that identifies existing system functionality and data against business and technical requirements, highlighting any gaps or challenges. ... In extreme situations, it may be necessary to replace an entire system. This is always the last resort due to the large investment needed and the disruption it can cause. 


Fake Websites, Phishing Surface in Wake of CrowdStrike Outage

These fake sites often promise quick fixes or falsely offer cryptocurrency rewards to lure visitors into accessing malicious content. George Kurtz, CEO of CrowdStrike, emphasized the importance of using official communication channels, urging customers to be wary of imposters. "Our team is fully mobilized to secure and stabilize our customers' systems," Kurtz said, noting the significant increase in phishing emails and phone calls impersonating CrowdStrike support staff. Imposters have also posed as independent researchers selling fake recovery solutions, further complicating efforts to resolve the outage. Rachel Tobac, founder of SocialProof Security, warned about social engineering threats in a series of tweets on X, formerly Twitter. "Criminals are exploiting the outage as cover to trick victims into handing over passwords and other sensitive codes," Tobac warned. She advised users to verify the identity of anyone requesting sensitive information. The surge in cybercriminal activity in the wake of the outage follows a common tactic used by cybercriminals to exploit chaotic situations.


Under-Resourced Maintainers Pose Risk to Africa's Open Source Push

To shore up security and avoid the dangers of under-resourced projects, companies have a few options, all starting with determining which OSS their developers and operations rely on. To that end, software bills of materials (SBOMs) and software composition analysis (SCA) software can help enumerate what's in the environment, and potentially help trim down the number of packages that companies need to check, verify, and manage, says Chris Hughes, chief security adviser for software supply chain security firm Endor Labs. "There's simply so much software, so many projects, so many libraries, that the idea of ... monitoring them all actively is just — it's very hard," he says. Finally, educating developers and package managers on how to produce and manage code securely is another area that can produce significant gains. The OpenSSF, for example, has created a free course LFD 121 as part of that effort. "We'll be building a course on security architectures, which will also be released later this year," OpenSSF's Arasaratnam says. "As well as a course on security for not just engineers, but engineering managers, as we believe that's a critical part of the equation."
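
As a small example of the "determine what you rely on" step, the sketch below reads a CycloneDX-format SBOM (the file name is an assumption) and tallies declared components by ecosystem using their package URLs.

    import json
    from collections import Counter

    # Sketch: enumerate components from a CycloneDX-format SBOM.
    with open("sbom.cyclonedx.json") as f:
        sbom = json.load(f)

    components = sbom.get("components", [])
    print(f"{len(components)} components declared")

    # A rough starting point for trimming the dependency list: count by ecosystem
    # using the package URL (purl) prefix, e.g. pkg:npm/..., pkg:pypi/...
    ecosystems = Counter(c.get("purl", "unknown").split("/")[0] for c in components)
    for eco, count in ecosystems.most_common():
        print(eco, count)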


Cross-industry standards for data provenance in AI

Knowing the source and history of datasets can help organizations better assess their reliability and suitability for training or fine-tuning AI models. This is crucial because the quality of training data directly affects the performance and accuracy of AI models. Understanding the characteristics and limitations of the training data also allows for a better assessment of model performance and potential failure modes. ... As AI regulations such as the EU AI Act evolve, data provenance becomes increasingly important for demonstrating compliance. It allows organizations to show that they use data appropriately and align with relevant laws and regulations. ... Organizations should start by reviewing the standards documentation, including the Executive Overview, use case scenarios, and technical specifications (available in GitHub). Launching a proof of concept (PoC) with a data provider is recommended to build internal confidence. Organizations lacking resources or deploying a PoC “light” may opt to use our metadata generator tool to create and access standardized metadata files


Why an Agile Culture Is Critical for Enterprise Innovation

In the end, embracing agility isn’t just about staying afloat in the turbulent waters of AI innovation; it’s about turning those waves into opportunities for growth and transformation. Because in this ever-evolving landscape, the businesses that thrive will be the ones that are flexible, responsive, and always ready to adapt to whatever comes next. Which brings me to my next point – you need to start loving failure. This requires a whole reframe because in the world of AI, getting things wrong can actually be the fastest way to get things right. Most companies are so scared of getting it wrong that they never try anything new and are frozen like a deer in headlights. In AI, that’s a death sentence. ... Be prepared for resistance. Change is scary, and you’ll always have a few “blockers” who are negative in their approach. These are the people you need to win over the most. In the meantime, you just need to weather the storm. Lastly, remember that becoming agile is a journey, not a destination. It’s about creating a mindset of continuous improvement. Always in beta? That’s absolutely fine and in the fast-paced world of AI, that’s exactly where you want to be.


The Rise of Cybersecurity Data Lakes: Shielding the Future of Data

Beyond real-time threat detection and analysis, cybersecurity data lakes offer organizations a powerful platform for vulnerability prediction and risk assessment. By examining past incidents, organizations can uncover trends and commonalities in security breaches, weak points in their defenses, and recurring threats. Cybersecurity data lakes store vast amounts of data spanning extended periods, which is a rich source of information for identifying recurring vulnerabilities or attack vectors. With techniques such as time-series analysis and pattern recognition, organizations can uncover historical vulnerability patterns through rigorous testing and use this knowledge to anticipate and mitigate future risks. In fact, this is one of the reasons why the global pentesting market is expected to rise to a value of $5 billion by 2031, with more innovative approaches like blackbox pentesting to exploit hidden attack vectors and using AI for vulnerability assessment (VAS) to improve efficiency. When combined with other vulnerability assessment methods like threat modeling and red team exercises, predictive modeling can also help organizations identify potential attack paths and attack surface areas and proactively implement defensive measures.
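
A toy version of the historical pattern analysis described above, using pandas on made-up incident records (the column names and weakness labels are assumptions):

    import pandas as pd

    # Illustrative records pulled from a security data lake.
    incidents = pd.DataFrame({
        "detected_at": pd.to_datetime(
            ["2023-01-10", "2023-04-02", "2023-09-15", "2024-02-20", "2024-06-01"]),
        "weakness": ["CWE-79", "CWE-89", "CWE-79", "CWE-79", "CWE-89"],
    })

    # Bucket incidents by quarter and weakness class.
    counts = (
        incidents
        .groupby([incidents["detected_at"].dt.to_period("Q"), "weakness"])
        .size()
        .unstack(fill_value=0)
    )
    print(counts)

    # Weakness classes that recur across multiple quarters are candidates for systemic fixes.
    recurring = [w for w, quarters_hit in (counts > 0).sum().items() if quarters_hit > 1]
    print("Recurring:", recurring)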


Internships can be a gold mine for cybersecurity hiring

Though an internship can pay off for an employer in the form of a fresh crop of talent to hire, it requires the company to invest time, planning, oversight, and resources. Designating one or more people to manage the process internally can make things easier for the organization. “Sit down with the supervisory personnel so they understand what that position is being advertised for, what the expected outcomes are and how to manage that intern, the program needs, and how they have to report [on that intern],” ... If possible, Smith recommends mentoring an intern, not simply ticking off a bureaucratic checklist of their tasks: “I do fervently believe you essentially need a sponsor, someone who’s going to take the intern under his or her wing and nurture that relationship, nurture that person.” Chiasson warns employers to manage their own expectations as carefully as they manage the interns themselves. Rather than expecting a unicorn to show up — an intern with one or more degrees, several technical certifications and other prior workplace experience — she urges companies to “take them on and then train them based on what you require.”


Desirable Data: How To Fall Back In Love With Data Quality

With so much data being pumped out at breakneck rates, it can seem like an insurmountable challenge to ensure data accuracy, completeness, and consistency. And despite technological, governance and team efforts, poor data can still endure. As such, maintaining data quality can feel like a perennial challenge. But quality data is fundamental to a company’s digital success. In order to create a business case for embracing data quality, you have to, firstly, demonstrate the far-reaching consequences of poor data quality on organisational performance. If you can present the problem from a business standpoint — backed by evidence and real-world scenarios of data quality issues leading to incurred costs, reputational risk, and uncapitalised opportunities — you can implement proactive measures and trigger a desire by top-level management to adapt processes. To bring your case to life, you then have to find ways of quantifying the business impact of data quality issues. This could take the form of illustrating the effect of bad data on a marketing campaign, showing the difference with and without data quality in relation to usable records, sales leads, and how this impacts your revenue.



Quote for the day:

"Defeat is not bitter unless you swallow it." -- Joe Clark

Daily Tech Digest - July 20, 2024

CrowdStrike’s IT outage makes it clear why cyber resilience matters

“This was not a code update. This was actually an update to content. And what that means is there’s a single file that drives some additional logic on how we look for bad actors. And this logic was pushed out and caused an issue only in the Microsoft environment,” CrowdStrike CEO and founder George Kurtz told Jim Cramer during an interview on CNBC earlier today. Trustwave CISO Kory Daniels recently said that “boards have begun asking the question: Is it important to have a formally titled chief resilience officer?” VentureBeat has learned that more boards of directors are adding cyber resilience to their broader risk management project teams. High-profile ransomware attacks that create chaos across supply chains are among the most costly for any business to withstand, as the United Healthcare breach makes clear. Outages caused by misconfigurations highlight the need for a unique form of cyber resilience so actively pursued that it becomes a core part of a company’s DNA. Misconfigured updates will continue to cause global outages. That goes with the territory of an always-on, real-time world defined by intricate, integrated systems. 


Federal judge greenlights securities fraud charges against SolarWinds and its CISO

“The biggest message for CISOs is that they need to make sure that not only must the board and senior management know about all risks, but they need to reflect that in whatever they tell third-parties and investors.” Brian Levine, a former federal prosecutor who today serves as the managing director at Ernst & Young overseeing cybersecurity strategies, agreed, saying “for SolarWinds, this was not a good result. The court found that they engaged in the most serious conduct, which is securities fraud.” But Levine said the bulk of the decision was more bad news for the SEC than it was good news for SolarWinds. “Agencies like the SEC are not used to bringing charges and losing on most of them,” Levine said. “For the court to find so many of the SEC theories were overreaches or incorrect is unusual. It will make some at the SEC think about how aggressive they want to be in using untested theories going forward.” Levine said he saw the ruling delivering a small message to enterprise security leaders: “Smart CISOs may be more careful about what they say in public statements. And also, whether they make public statements about their security at all. You don’t get much credit for making them,” and there is a potential downside.


The Looming Crisis in the Data Observability Market

Enterprises should push for standards and openness from observability vendors. The reason isn’t simply technical. The real problem with closed systems is that they limit value. Today, enterprises express grave concerns about skyrocketing observability costs because they are locked into overpaying for different tools that do the same task in other areas of the organization. In contrast, tools that adhere to OTel are beginning to emerge, and these are better able to collect, export, and analyze telemetry data from any source. With the spread of OTel and the development of a standard observability operating system, enterprises will own the data they generate, with no vendor lock-in at any point along the observability and monitoring path. Today, the reality is that costs are skyrocketing because the network team will use one tool, security relies on something else, and e-commerce prefers yet another. Each team needs observability to optimize performance, but they wouldn’t need to keep overpaying for duplicate tools if they genuinely owned their data. This means that it is vital for observability buyers to insist on open standards and APIs in general and OTel in particular.
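
The practical upside of OTel adherence is that instrumentation code stays the same while backends change; a minimal Python sketch using the OpenTelemetry SDK's console exporter looks like the following, and any OTLP-compatible exporter could be swapped in without touching the instrumented code.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire up a provider once at startup; swapping ConsoleSpanExporter for an
    # OTLP exporter changes the destination, not the instrumentation below.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)

    # Illustrative span: the operation name and attribute are arbitrary examples.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("order.value", 42.50)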


Using Threat Intelligence to Predict Potential Ransomware Attacks

The information gathered by threat intelligence initiatives includes details about cyberattack plans, methods, bad actor groups that pose a threat, possible weak spots within the organization’s current security infrastructure and more. By gathering information and conducting data analysis, threat intelligence tools can help organizations identify, understand, and proactively defend against attacks. Threat intelligence can help thwart attacks before they occur and strengthen an organization’s security infrastructure. This means that security analysts can utilize threat intelligence to refine their research and locate the malicious actor who is either planning or executing a ransomware attack. ... Additionally, threat intelligence platforms can utilize machine learning, automated correlation processing, and artificial intelligence to pinpoint specific cyber breach occurrences and map patterns of behavior across instances. For example, analysts can easily recognize the common tactics, techniques, and procedures used by current ransomware attack groups. By identifying common attack methods, organizations can better prepare to blunt the effectiveness of these methods and prevent an attack.


16 Effective Strategies For Measuring Reputation Risk

An early indicator of reputation risk is employee behavior changes and feedback. Measure these via internal surveys and turnover rates. Employees experience the repercussions of external reputation issues firsthand, which can be early indicators of deeper problems. This not only helps detect internal issues that could spill over into public perception, but it also encourages healthy corporate culture. ... Reputation risk comes in many forms, seen and hidden. Being sensitive to customer sentiment, employee feedback, media perception and other stakeholders is important. Using a combination of tools such as media monitoring, social media analytics across multiple platforms and customer and employee surveys can help a company detect negative signals and take corrective action before the risk escalates. ... It's important to define what reputational "risk" really means for your company. The risk could take the form of negative coverage or critical sentiment on social media, but inconspicuousness can present a profound threat, particularly for startups or companies looking to transform a legacy brand. Not all press is good press, but risk aversion to the point of invisibility can be a risk, too.


Safeguard Personal and Corporate Identities with Identity Intelligence

The ways that cybercriminals get their hands on credentials vary. Phishing schemes – deceptive emails designed to trick recipients into divulging their credentials – are one way. Another method that's gaining in popularity is stealer malware. Stealers are a category of malware that harvest credentials such as usernames, passwords, cookies, and other data from infected systems. Other tactics include brute force attacks, where threat actors use tools to automatically generate passwords and then try them out one by one to access a user account, and social engineering tactics, in which threat actors manipulate users into giving away sensitive information. According to some estimates, by trying one million random combinations of emails and passwords, attackers can potentially compromise between 10,000 and 30,000 accounts. ... Robust security measures like multi-factor authentication (MFA) and consistent, stringent employee training and enforcement of data protection policies can help make companies less vulnerable to this type of threat. However, missteps happen. And when they do, security teams must be immediately alerted when any compromised access is discovered on dark web marketplaces. This is where identity intelligence comes in.

With manufacturing systems becoming more complex, AI-driven data pattern recognition is crucial for sharpening quality control, predicting equipment issues, and optimizing production for fewer defects, higher Overall Equipment Effectiveness (OEE), and significant cost savings. With Industry 4.0 and the emergence of Industry 5.0, there will be too much data being generated every second for the human mind to cope with — AI will become an indispensable tool for manufacturers ... As roles evolve, workers will need new skills. Providing them with the necessary tools and training to work alongside, and be augmented by, AI will ensure a productive synergy between human ingenuity and machine efficiency. AI greatly enhances the value proposition of connected worker platforms by empowering the worker with capabilities and insights designed to further optimize their performance. ... With AI-powered systems, manufacturers can now optimize their operations and make more informed decisions, leading to reduced waste and improved efficiency. The IFS AI research found respondents think AI can have the biggest impact on sustainability through designing better flow in manufacturing processes to improve efficiency.


A&M: AI in Fintech – A Double-Edged Sword for Cybersecurity

“It is essential that fintechs are abreast of the latest challenges and the solutions that are available to ensure that they are best able to protect both their customers and their business,” he says. “One only has to look at how 'well' deepfakes have developed over the past couple of years to see how things are progressing… never mind the impact GenAI will have on the quality and realism of such attacks.” While cybersecurity aims must remain at the forefront of financial institutions’ thinking, Phil reminds us there is ‘no silver bullet’ solution to the problem fraudsters pose today. “It is a case of improving awareness, research and knowledge to ensure that practices, procedures and technologies are implemented to improve protection,” he continues. “One of the most commonly overlooked elements of this is training and awareness, as this can be a key control in helping mitigate risk.” ... “The emergence of new fraud typologies (particularly more sophisticated APP fraud) has led to a change in mindset in recent years – FS institutions are now increasingly aware that educational initiatives, especially when tailored to the customer base in question, form a critical component of their preventative fraud controls.”


Khan believes that AI and human intelligence can be combined, dispelling the fear that AI may eventually replace humans as it advances in its ability to perform tasks. "The study examines the challenges in incorporating AI technology in real-world industrial applications and how IA can improve process monitoring, fault detection, and decision-making to improve process safety," Amin said. Khan contends that AI will improve safety by analyzing real-time data, predicting maintenance needs, and automatically detecting faults. However, the IA approach, using human decision-making, is also expected to reduce incident rates, lower operational costs, and increase reliability. "The application of AI in chemical engineering presents significant challenges, which means it is not enough to ensure comprehensive process safety," Sajid said. ... AI risks include data quality issues, overreliance on AI, lack of contextual understanding, model misinterpretation, and training and adaptation challenges. On the other hand, the risks associated with IA include human error in feedback, conflict in AI-HI decision-making, biased judgment, complexity in implementation, and reliability issues.


Energy and the promise of AI

Access to electricity is becoming a limiting factor in running data centers, and hyperscale customers have turned to nuclear power as a way of powering their data centers with zero-carbon generation. ... While there is potential for reducing the power consumption required for AI workloads through new algorithms and approaches, more power-efficient GPUs, and new sources of power, today, direct-to-chip liquid cooling (DLC) offers the most immediate opportunity to reduce PUE and improve power efficiency, with a PUE of 1.06 achieved in practice through DLC. In addition, the latest high core-count server CPUs have improved core/watt performance, allowing data center footprint reduction and the associated power savings while achieving the same level of performance as older systems. Many of these systems will also benefit from DLC due to the increased processor TDP needed for these higher core counts. While many data center operators want the latest and fastest CPU and GPU-based systems, there is an opportunity to investigate the right match between the agreed-upon SLAs and the energy required for the servers.
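
As a rough illustration of why a PUE of 1.06 matters, the sketch below compares total facility draw at a fixed IT load. Only the 1.06 figure comes from the article; the 1.5 air-cooled baseline and the 1 MW IT load are assumptions chosen for round numbers.

# Sketch: how a PUE improvement translates into facility power savings.
# PUE = total facility power / IT equipment power, so for a fixed IT load
# the facility draw scales linearly with PUE. Figures are illustrative only.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    return it_load_kw * pue

it_load_kw = 1_000                                     # hypothetical 1 MW of IT load
air_cooled = facility_power_kw(it_load_kw, 1.5)        # assumed air-cooled baseline
liquid_cooled = facility_power_kw(it_load_kw, 1.06)    # DLC figure cited in the article

print(f"Air-cooled facility draw:    {air_cooled:,.0f} kW")
print(f"DLC facility draw:           {liquid_cooled:,.0f} kW")
print(f"Savings at the same IT load: {air_cooled - liquid_cooled:,.0f} kW")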



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - July 19, 2024

Master IT Compliance: Key Standards and Risks Explained

IT security focuses on protecting an organization’s data and guarding against breaches and cyberattacks. While IT regulatory policies are generally designed to ensure security, making security and compliance closely intertwined, they are not identical. Regulatory policies frequently mandate specific security practices, thus aligning compliance efforts with security goals. For example, regulations might require an organization to have data encryption, access controls, and regular security audits. However, being compliant does not automatically guarantee an organization’s security. Compliance mandates often set minimum standards, and organizations may need to implement additional security measures beyond what is required to adequately protect their data. Conversely, some aspects of the compliance process may do nothing to enhance security. ... Creating an IT compliance checklist can greatly simplify the arduous task of maintaining compliance. The checklist ensures critical tasks are consistently performed and should be tailored to each organization’s industry, specific compliance requirements, and daily operations.
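
As a loose illustration of what such a checklist might look like in practice, here is a minimal, hypothetical tracker. The control names, regulations and owners are invented; a real checklist would be driven by the organization's actual regulatory obligations.

# Sketch: a hypothetical compliance checklist tracker. The point is simply
# that checklist items should be explicit, assignable, and reportable.
from dataclasses import dataclass

@dataclass
class ControlCheck:
    control: str      # e.g. "data encryption at rest"
    required_by: str  # the regulation or framework mandating it
    owner: str
    compliant: bool

def report_gaps(checks: list[ControlCheck]) -> list[ControlCheck]:
    """Return the controls that still need remediation."""
    return [c for c in checks if not c.compliant]

checklist = [
    ControlCheck("data encryption at rest", "hypothetical regulation A", "infra team", True),
    ControlCheck("role-based access controls", "hypothetical regulation A", "platform team", True),
    ControlCheck("periodic security audit", "hypothetical regulation B", "security team", False),
]

for gap in report_gaps(checklist):
    print(f"GAP: {gap.control} (required by {gap.required_by}, owner: {gap.owner})")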


The Dynamic Transformation Of Enterprise Fraud Management Ecosystems

While collaboration and information sharing have become pivotal, financial institutions also face pressure to consolidate technology and reduce the number of vendors with whom they work. This is evidenced by the growing number of financial institutions investing in cyber fraud fusion centres to create a centralized environment that aligns the data, technology and operational capabilities of traditionally siloed teams. ... Given the complexity of cybercrime and the differences in financial institutions and their unique requirements, EFM strategy requires a layered approach and flexibility in the solutions that support it. A layered defence allows financial institutions to address different aspects and stages of fraud attempts across the digital lifecycle and cross-verify suspicious activities to increase confidence in risk decisions. The importance of behavioural biometrics intelligence within the EFM ecosystem can no longer be ignored, given customer adoption and success. Many forward-thinking institutions have implemented the technology to bolster or complement existing EFM systems, detect emerging fraud types and elevate customer safety in digital banking.


Law Enforcement Eyes AI for Investigations and Analysis

For all of its potential benefits, AI is also vulnerable to misuse. Weak oversight, for instance, can lead to biases in predictive policing or errors in evidence analysis. "It's crucial to implement checks and balances to ensure that AI is used ethically and accurately," Rome says. Meanwhile, many law enforcement organizations are reluctant to embrace technology due to budget constraints, a lack of technical expertise, and an overall resistance to change. Concerns about privacy and civil liberties are also hindering adoption. In particular, there's the possibility of AI bias, which can lead to inaccurate conclusions when discriminatory data and algorithms are baked into AI models. ... Despite the challenges, the long-term outlook is promising, Rome says. "As technology advances and law enforcement agencies become more familiar with AI's potential, its adoption is likely to increase," he predicts. Claycomb agrees, but notes that adopters will need to implement workflows that take full advantage of other technology tools, including deploying powerful and connected mobile device fleets.


How Generative AI Has Forever Changed the Software Testing Process

Automation has been a game changer in the software testing process, but there is still one big problem: tests can eventually lose their relevance and accuracy. ... Generative AI, unlike your average automation process, is backed by a pool of data. On top of that, it is continuously learning from each command and each addition to that data. This means that if a new test case has a slightly different aim, the AI system should pick up on that and make the necessary adjustments. This can still be hit-or-miss, depending on how well trained the underlying data is, but with proper human oversight it can take a lot of work off the development process. ... When test models are created manually, they are built against an assumed background: the developer has an environment in mind (or several of them) and creates a realistic area to test against. This brings various limitations, depending on how many data sets are used. Generative AI, however, can create diverse scenarios that a human might never have thought of. AI can hallucinate when it does not have enough data, but even those scenarios can spark useful ideas.


Amid Licensing Uncertainty, How Should IaC Management Adapt?

It’s a deliberation that organizations might have comfortably back-burnered until last summer, when Terraform’s continued viability as an IaC industry standard came under intense scrutiny after HashiCorp changed its license scheme from a purely open source model to a less-than-open alternative. Since that time, the Linux Foundation-backed OpenTofu initiative appears to have changed the headers of code HashiCorp had previously released under its new Business Source License (BUSL), rereleasing it under the MPL 2.0 license. ... Organizations will want to impose restrictions on developers’ resource usage, Williams foresees. Those restrictions will be based not on capacity, which the IaC engineer understands more readily, but on cost. Presently, enabling the restrictions necessary to maintain compliance and achieve security objectives requires, at the very least, expert guidance. Meanwhile, the influx of talent in platform engineering is weighted towards AI engineers who may not know what these infrastructure resources even are.
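
To sketch what a cost-based restriction could look like in practice, the snippet below shows a simple pre-apply budget gate. The resource names, cost estimates and budget figure are hypothetical; in a real pipeline the estimates would come from a cost-estimation step rather than being hard-coded.

# Sketch: a cost-based guardrail applied before infrastructure changes are
# approved. All figures are hypothetical inputs for illustration.

MONTHLY_BUDGET_USD = 5_000  # hypothetical team budget

def check_plan_cost(resource_costs: dict[str, float], budget: float = MONTHLY_BUDGET_USD) -> bool:
    """Return True if the planned resources fit within the budget."""
    total = sum(resource_costs.values())
    if total > budget:
        print(f"BLOCKED: estimated ${total:,.0f}/month exceeds budget of ${budget:,.0f}/month")
        return False
    print(f"OK: estimated ${total:,.0f}/month is within budget")
    return True

# Hypothetical per-resource estimates emitted by a cost tool for a proposed change.
planned = {"gpu_node_pool": 4_200.0, "object_storage": 310.0, "load_balancer": 190.0}
check_plan_cost(planned)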


Implementing Threat Modeling in a DevOps Workflow

Integrating threat modeling into a DevOps workflow involves embedding security practices throughout the development and operations lifecycle. This approach ensures continuous security assessment and improvement, aligning with the DevOps principles of continuous integration and continuous deployment (CI/CD). ... Automated tools play a crucial role in facilitating continuous threat modeling and security assessments. Tools such as OWASP Threat Dragon, Microsoft Threat Modeling Tool and IriusRisk can automate various aspects of threat modeling, making it easier to integrate these practices into the CI/CD pipeline. Automation helps ensure that threat modeling is performed consistently and efficiently, reducing the burden on development and security teams. ... Effective threat modeling requires close collaboration between development, operations and security teams. This cross-functional approach ensures that security is considered from multiple perspectives and throughout the entire development lifecycle. Collaboration can be fostered through regular meetings, joint workshops and shared documentation.
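
One way to wire threat modeling into a CI/CD pipeline is a gate that fails the build when the exported threat model still contains unmitigated high-severity threats. The JSON layout below is hypothetical; each of the tools named above has its own export format, so this is a sketch of the pattern rather than an integration with any specific tool.

# Sketch: a CI gate that blocks a deploy when a threat-model export lists
# unmitigated high-severity threats. The file format is assumed, not real.
import json
import sys

def unmitigated_high_risks(model_path: str) -> list[dict]:
    with open(model_path) as f:
        model = json.load(f)
    return [
        t for t in model.get("threats", [])
        if t.get("severity") == "high" and t.get("status") != "mitigated"
    ]

if __name__ == "__main__":
    open_threats = unmitigated_high_risks(sys.argv[1])
    for t in open_threats:
        print(f"UNMITIGATED: {t.get('title', 'unnamed threat')}")
    # A non-zero exit code makes the CI/CD stage fail, blocking the release.
    sys.exit(1 if open_threats else 0)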


Want ROI from genAI? Rethink what both terms mean

Early genAI apps often delivered breathtaking results in small pilots, setting expectations that didn’t carry over to larger deployments. “One of the primary culprits of the cost versus value conundrum is lack of scalability,” said KX’s Twomey. He points to an increasing number of startup companies using open-source genAI technology that is “sufficient for introductory deployments, meaning they work nicely with a couple hundred unstructured documents. Once enterprises feel comfortable with this technology and begin to scale it up to hundreds of thousands of documents, the open-source system bloats and spikes running costs,” he said. ... Even when genAI succeeds, its results are sometimes less valuable than anticipated. For example, generative AI is a very effective tool for creating content that is generally handled by lower-level staffers or contractors, where it simply tweaks existing material for use in social media or e-commerce product descriptions. It still needs to be verified by humans, but it has the potential to cut costs in creating low-level content. But because that content is often low level, some have questioned whether it will really deliver any meaningful financial advantages.


How AI Will Fuel the Future of Observability

A unified observability platform makes use of AI via AIOps, which applies AI and machine learning (ML) models to collect data from throughout the enterprise – from logs and alerts to applications, containers, and clouds. It performs tasks ranging from root cause analysis and incident prevention to advanced correlation. And although AI has already proved valuable, its impact is about to become considerably more pronounced, fueling observability in the near- and long-term future. ... Via constant monitoring, an AI could ingest incoming data and detect an anomaly or some other activity that exceeds preset thresholds. It could then perform a series of actions, similar to what happens with remediation scripts, to resolve the problem. Just as importantly, if the AI model doesn’t resolve the problem, it would automatically open a ticket with the platform used for managing issues. ... AI and ML models need data to work well. And part of assessing your environment is identifying the visibility gaps in your organization. A unified observability platform can provide visibility into the entire enterprise and how everything within it is connected.
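
A minimal sketch of that detect, remediate, escalate loop is shown below, using a simple z-score threshold in place of a trained model; remediate() and open_ticket() are hypothetical stand-ins for real automation and ticketing integrations.

# Sketch: detect an anomaly against a preset threshold, attempt automated
# remediation, and escalate to a ticket if remediation does not resolve it.
import statistics

THRESHOLD = 3.0  # flag points more than 3 standard deviations from the mean

def is_anomalous(history: list[float], latest: float) -> bool:
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero on flat series
    return abs(latest - mean) / stdev > THRESHOLD

def remediate(metric: str) -> bool:
    print(f"running remediation playbook for {metric}")
    return False  # pretend remediation did not resolve the issue

def open_ticket(metric: str) -> None:
    print(f"opening incident ticket for {metric}")

history = [210, 205, 198, 215, 202, 207, 211, 204]
latest = 560  # e.g. a latency spike in milliseconds
if is_anomalous(history, latest):
    if not remediate("checkout_latency_ms"):
        open_ticket("checkout_latency_ms")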


Fearing disruption? A skills-based talent strategy builds business resiliency

“It’s important for IT leaders to understand that being proactive in developing the skills of their tech workforce is crucial to helping future-proof their operations against technological disruption. Those who invest in the right skills — and help their workforce gain new skills — are likely to remain ahead of the wave of digital transformation,” says Ryan Sutton, a technology hiring and consulting expert at Robert Half. Developing the skills necessary to support transformation initiatives builds business resiliency. By anticipating future skills needs, IT leaders can ensure their organizations have the right training programs in place to upskill workers as necessary, Sutton says. ... “The best way for IT leaders to know which skills gap would be a threat is by establishing a strategic workforce plan connected to changing business demands. Some organizations are getting better at building databases that track employee skills in real-time as opposed to relying on job descriptions, which may not always be accurate or updated. It’s time to understand what skills exist on your team to help identify gaps,” says Jose Ramirez, director analyst at Gartner.


Data centre trends: Is it possible to digitalise and decarbonise?

It can be difficult to balance the push for digitalisation and tech progress with the need for sustainability as the climate crisis bears down. Add in developing regulation, cybersecurity and the need to upgrade infrastructure, and there are a lot of factors for IT teams to consider right now. ... Lantry also argues that digitalisation can be a pathway to sustainability, rather than a barrier to it, as businesses “adopting digital-first strategies” can help achieve their environmental, social and governance (ESG) objectives. “By integrating these practices, IT leaders can ensure that their digital transformation initiatives align with their sustainability goals,” he said. When Google revealed its significant rise in emissions earlier this year, it described its own climate neutral goals as “extremely ambitious” and said that it “won’t be easy”. But the tech giant also claimed that technology like AI can play a “critical enabling role” in helping the world to reach a “low-carbon future” by aiding in various environmental tasks. Lantry had a similar view when it comes to the potential benefits of the broader data centre sector.



Quote for the day:

"Education is the ability to listen to almost anything without losing your temper or your self-confidence." -- Robert Frost