Daily Tech Digest - July 23, 2024

Transforming GRC Landscape with Generative AI

Streamlining GRC workflows and integrating various components of the technology stack can significantly enhance efficiency. Apache Airflow is an open-source workflow automation tool that orchestrates complex data pipelines and automates GRC processes, leading to substantial efficiency gains. Apache Camel facilitates integration between different system components, ensuring smooth data flow across the technology stack. Additionally, robotic process automation (RPA) can be implemented using open-source platforms like Robot Framework. These platforms automate repetitive tasks within GRC processes, further enhancing operational efficiency and allowing human resources to focus on more strategic activities. By leveraging these open-source tools and techniques, organizations can build a robust infrastructure to support GenAI and RAG in their GRC processes, achieving enhanced efficiency, accuracy, and strategic insights. ... Traditional approaches are labour-intensive and prone to human error, leading to inefficiencies and increased compliance risks. By contrast, GenAI and RAG can streamline processes, reduce the burden on human resources, and provide timely and accurate information for strategic planning. 
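The orchestration pattern described above can be sketched with a minimal, dependency-ordered task runner in pure Python. This is a toy stand-in for what Airflow does at production scale, and the GRC task names are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical GRC pipeline: each task maps to the set of tasks it depends on.
grc_tasks = {
    "collect_evidence": set(),
    "normalize_data": {"collect_evidence"},
    "run_control_checks": {"normalize_data"},
    "generate_report": {"run_control_checks"},
}

def run_pipeline(tasks):
    """Execute tasks in dependency order, like a minimal DAG scheduler."""
    executed = []
    for name in TopologicalSorter(tasks).static_order():
        executed.append(name)  # a real runner would invoke the task callable here
    return executed

order = run_pipeline(grc_tasks)
```

An orchestrator like Airflow layers scheduling, retries, and monitoring on top of exactly this kind of dependency graph.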


Two AI Transparency Concerns that Governments Should Align On

AI raises two fundamental transparency concerns that have gained in salience with the spread of generative AI. First, the interaction with AI systems increasingly resembles human interaction. AI is gradually developing the capability of mimicking human output, as evidenced by the flurry of AI-generated content that bears similarities to human-generated content. The “resemblance concern” is thus that humans are left guessing: Is an AI system in use? Second, AI systems are inherently opaque. Humans who interact with AI systems are often in the dark about the factors and processes underlying AI outcomes. The “opacity concern” is thus that humans are left wondering: How does the AI system work? ... Regulatory divergence presents a unique opportunity for governments to learn from each other. Governments can draw from the expertise accumulated by national regulators and other governments that are experimenting to find effective AI rules. For example, governments looking to establish information rights can learn from Brazil’s precise elaboration of information to be disclosed, South Korea’s detailed procedure for requesting information, and the EU’s unique exception mechanisms.


5 IT risks CIOs should be paranoid about

CIOs sitting on mounting technical debt must turn paranoia into action plans that communicate today’s problems and tomorrow’s risks. One approach is to define and seek agreement on non-negotiables with the board and executive committee, outlining criteria for when upgrading legacy systems must be prioritized above other business objectives. ... CIOs should be drivers of change — which can create stress — while taking proactive and ongoing steps to reduce stress in their organization and across the company. The risks of burnout mount because of higher business expectations of delivering new technology capabilities, leading change management activities, and ensuring systems are operational. CIOs should promote ways to disconnect and reduce stress, such as improving communications, simplifying operations, and setting realistic objectives. ... “When considering the growing number of global third parties organizations need to collaborate with, protecting the perimeter with traditional security methods becomes ineffective the moment the data leaves the enterprise,” says Vishal Gupta, CEO & co-founder of Seclore.


Understanding the difference between competing AI architectures

A common misconception is that AI infrastructure can just be built to the NVIDIA DGX reference architecture. But that is the easy bit and is the minimum viable baseline. How far organizations go beyond that is the differentiator. AI cloud providers are building highly differentiated solutions through the application of management and storage networks that can dramatically accelerate the productivity of AI computing. ... Another important difference to note with regard to AI architecture versus traditional storage models is the absence of a requirement to cache data. Everything is done by direct request. The GPUs talk directly to the disks across the network; they don't go through the CPUs or the TCP/IP stack. The GPUs are directly connected to the network fabric. They bypass most of the network layers and go directly to the storage. It removes network lag. ... Ultimately, organisations should partner with a provider they can rely on: one that can offer guidance, engineering, and support. Businesses using cloud infrastructure are doing so to concentrate on their own core differentiators.


How Much Data Is Too Much for Organizations to Derive Value?

“If data is in multiple places, that is increasing your cost,” points out Chris Pierson, founder and CEO of cybersecurity company BlackCloak. Enterprises must also consider the cost of maintenance, which could include engineering and program analyst time. Beyond storage and maintenance costs, data also comes with the potential cost of risk. Threat actors constantly look for ways to access and leverage the data safeguarded by enterprises. If they are successful, and many are, enterprises face a cascade of potential costs. ... Once an enterprise is able to wrap its arms around data governance, leaders can start to ask questions about what kind of data can be deleted and when. The simple answer to the question of how much is too much boils down to value versus risk. “Start with the fundamental question: What does the company get from the data? Does it cost more to store and protect that data than the data actually provides to the organization?” says Wall. When it comes to retention, consider why data is being collected and how long it is needed. “If you don't need the data, don't collect it. That should always be the first fundamental rule,” says Pierson.
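Wall's value-versus-cost question can be made concrete with a rough screening calculation. The figures below are hypothetical annual estimates, and "breach risk cost" stands in for probability times impact:

```python
def retention_worthwhile(annual_value, storage_cost, protection_cost, breach_risk_cost):
    """Crude screen: does a dataset return more value per year than it costs
    to store, protect, and carry as breach risk?"""
    return annual_value > storage_cost + protection_cost + breach_risk_cost

# Hypothetical dataset: $40k of annual value against $15k storage,
# $10k protection, and $20k expected breach cost.
keep = retention_worthwhile(40_000, 15_000, 10_000, 20_000)
```

Here the combined costs ($45k) exceed the value ($40k), so the screen suggests deleting or archiving the data.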


Empowering Developers in Code Security

When your team is ready to add security earlier in the development process, we suggest introducing 'guardrails' into their workflow. Guardrails, unlike wholly new processes, can slide into place unobtrusively, providing warnings about potential security issues only when they are actionable and true positives. Ideally, you want to minimize friction and enable developers to deliver safer, better code that will pass tests down the line. One tool that is almost universal across development and DevOps teams is Git. With over 97% of developers using Git daily, it is a familiar platform that can be leveraged to enhance security. Built directly into Git is an automation mechanism called Git hooks, which can trigger just-in-time scanning at specific stages of the Git workflow, such as right before a commit is made. Because issues are caught before the commit, with direct feedback on how to fix them, developers can address security concerns with minimal disruption. This approach is much less expensive and time-consuming than addressing issues later in the development process, and it can actually increase the time available for new code by reducing the amount of maintenance that eventually needs to be done.
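A minimal sketch of such a just-in-time check, assuming a wrapper installs it as a `pre-commit` hook; the regex patterns are illustrative only, not a complete secret scanner:

```python
import re

# Illustrative patterns only; real scanners cover far more credential formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return (pattern_name, match) findings for one file's content."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings

# Installed as .git/hooks/pre-commit, a wrapper would run this over the files
# listed by `git diff --cached --name-only` and exit non-zero on any finding,
# blocking the commit until the secret is removed.
findings = scan_for_secrets("db_password = 'hunter2'\naws_key = 'AKIAABCDEFGHIJKLMNOP'")
```

The feedback loop is immediate: the developer sees the flagged line before the secret ever enters history, rather than after a pipeline scan or an incident.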


Retrieval-augmented generation refined and reinforced

RAG strengthens the application of generative AI across business segments and use cases throughout the enterprise, for example, code generation, customer service, product documentation, engineering support, and internal knowledge management. ... The journey to industrializing RAG solutions presents several significant challenges along the RAG pipeline. These need to be tackled before RAG solutions can be effectively deployed in real-world scenarios. Basically, a RAG pipeline consists of four standard stages — pre-retrieval, retrieval, augmentation and generation, and evaluation. Each of these stages presents certain challenges that require specific design decisions, components, and configurations. At the outset, determining the optimal chunking size and strategy proves to be a nontrivial task, particularly when faced with the cold-start problem, where no initial evaluation data set is available to guide these decisions. A foundational requirement for RAG to function effectively is the quality of document embeddings. Guaranteeing the robustness of these embeddings from inception is critical, yet it poses a substantial obstacle, just like the detection and mitigation of noise and inconsistencies within the source documents.
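The chunking decision can be grounded with the simplest possible strategy, fixed-size chunks with overlap. The sizes below are arbitrary starting points, which is precisely what makes tuning them hard under the cold-start problem:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character chunks that overlap, so that
    sentences straddling a boundary appear intact in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = chunk_text("a" * 500, chunk_size=200, overlap=50)
```

Production pipelines usually move on to sentence- or structure-aware chunking, but judging whether that actually helps still requires the evaluation data set that the cold-start problem withholds.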


Confidential AI: Enabling secure processing of sensitive data

Confidential AI is the application of confidential computing technology to AI use cases. It is designed to help protect the security and privacy of the AI model and associated data. Confidential AI utilizes confidential computing principles and technologies to help protect data used to train LLMs, the output generated by these models and the proprietary models themselves while in use. Through vigorous isolation, encryption and attestation, confidential AI prevents malicious actors from accessing and exposing data, both inside and outside the chain of execution. ... Confidential AI can also enable new or better services across a range of use cases, even those that require activation of sensitive or regulated data that may give developers pause because of the risk of a breach or compliance violation. This could be personally identifiable user information (PII), business proprietary data, confidential third-party data or a multi-company collaborative analysis. This enables organizations to more confidently put sensitive data to work, as well as strengthen protection of their AI models from tampering or theft.


Women in IT Security Lack Opportunities, Not Talent

Female leaders are also instrumental in advocating for policies and practices that promote diversity and inclusion, such as equitable hiring practices, sponsorship programs, and family-friendly policies. "By actively working to create a more inclusive environment, female cyber leaders can help pave the way for future generations of women in cybersecurity," Dohm said. ... Guenther noted that women often encounter unconscious biases that affect decisions regarding leadership potential and technical capabilities, particularly as it relates to perception bias. "Women in cybersecurity, as in many other fields, often face double standards in how their actions and words are perceived compared to their male counterparts," she said. For example, assertiveness, decisiveness, and direct communication – qualities praised in male leaders – can be unfairly labeled as aggressive or overly emotional when exhibited by women. This disparity in perception can hinder women from being seen as potential leaders or being evaluated fairly. "Addressing these biases is crucial for creating a truly equitable workplace where everyone is judged by the same standards and behaviors are interpreted consistently, regardless of gender," Guenther said.


Early IT takeaways from the CrowdStrike outage

Recovering from the CrowdStrike outage has been an all-hands-on-deck event. In some instances, companies have needed humans to physically touch and reboot impacted machines in order to recover — an arduous process, especially at scale. If you have outsourced IT operations to managed service providers, consider that those MSPs may not have enough staff on hand to mitigate your issues along with those of their other clients, especially when a singular event has widespread fallout. ... Ensure you review recovery steps and processes on a regular basis so that your team knows exactly where those recovery keys are and what processes are necessary to obtain them. While BitLocker is often mandated for compliance reasons, it also adds a layer of complications you may not be prepared for. ... The underlying culprit, a faulty CrowdStrike update, was also quickly identified. In other incident situations, you may not be so quickly informed. It may not be clear what has happened and what assets have been impacted. Often, you’ll need to reach out to staff who are working closely with impacted assets to determine what is going on and what actions to take.



Quote for the day:

"Effective questioning brings insight, which fuels curiosity, which cultivates wisdom." -- Chip Bell

Daily Tech Digest - July 22, 2024

AI regulation in peril: Navigating uncertain times

Existing laws are often vague in many fields, including those related to the environment and technology, leaving interpretation and regulation to the agencies. This vagueness in legislation is often intentional, for both political and practical reasons. Now, however, any regulatory decision by a federal agency based on those laws can be more easily challenged in court, and federal judges have more power to decide what a law means. This shift could have significant consequences for AI regulation. Proponents argue that it ensures a more consistent interpretation of laws, free from potential agency overreach. However, the danger of this ruling is that in a fast-moving field like AI, agencies often have more expertise than the courts. ... The judicial branch has no such existing expertise. Nevertheless, the majority opinion said that “…agencies have no special competence in resolving statutory ambiguities. Courts do.” ... Going forward, then, when passing a new law affecting the development or use of AI, if Congress wished for federal agencies to lead on regulation, they would need to state this explicitly within the legislation. Otherwise, that authority would reside with the federal courts. 


Fostering Digital Trust in India's Digital Transformation journey

In this era where digital interactions dominate, trust is the anchor for building resilient organizations and stronger relationships with stakeholders and customers. As per ISACA’s State of Digital Trust 2023 research, 90 percent of respondents in India say digital trust is important and 89 percent believe its importance will increase in the next five years. Nowhere is this truer than in India, the world’s largest digitally connected democracy and a burgeoning hub of digital innovation and transformation. ... A key hurdle in building and maintaining digital trust in most countries is the absence of a standardized conceptual framework for measurement, alongside uneven access to reliable internet infrastructure and digital literacy. In India’s case, a rapidly expanding digital footprint brings an equally high threat from issues such as lack of funding, unavailability of technological resources, shortage of skilled workforce, lack of alignment between digital trust and enterprise goals, inadequate governance mechanisms, and the spread of misinformation through social media, all of which can lead to financial fraud and data theft.


Tech debt: the hidden cost of innovation

While tech debt may seem like an unavoidable cost for any business heavily investing in innovation, delving deeper into its causes can reveal issues that may derail operations entirely. Many organisations struggle to find a solution, as the time required for risk analysis can seem unfeasible. Yet, by recognising early signs, businesses can leverage the right tools and find the right partners to facilitate a low-risk and controlled modernisation of legacy systems. Any IT modernisation program requires a strategic, evidence-based approach, starting with a rigorous fact-finding process to identify opportunities and inefficiencies within legacy systems. ... Making a case for modernisation requires articulating the expected benefits, costs and challenges beforehand. This begins with a comprehensive analysis that identifies existing system functionality and data against business and technical requirements, highlighting any gaps or challenges. ... In extreme situations, it may be necessary to replace an entire system. This is always the last resort due to the large investment needed and the disruption it can cause. 


Fake Websites, Phishing Surface in Wake of CrowdStrike Outage

These fake sites often promise quick fixes or falsely offer cryptocurrency rewards to lure visitors into accessing malicious content. George Kurtz, CEO of CrowdStrike, emphasized the importance of using official communication channels, urging customers to be wary of imposters. "Our team is fully mobilized to secure and stabilize our customers' systems," Kurtz said, noting the significant increase in phishing emails and phone calls impersonating CrowdStrike support staff. Imposters have also posed as independent researchers selling fake recovery solutions, further complicating efforts to resolve the outage. Rachel Tobac, founder of SocialProof Security, warned about social engineering threats in a series of tweets on X, formerly Twitter. "Criminals are exploiting the outage as cover to trick victims into handing over passwords and other sensitive codes," Tobac warned. She advised users to verify the identity of anyone requesting sensitive information. The surge in cybercriminal activity in the wake of the outage follows a common tactic used by cybercriminals to exploit chaotic situations.


Under-Resourced Maintainers Pose Risk to Africa's Open Source Push

To shore up security and avoid the dangers of under-resourced projects, companies have a few options, all starting with determining which OSS their developers and operations rely on. To that end, software bills of materials (SBOMs) and software composition analysis (SCA) software can help enumerate what's in the environment, and potentially help trim down the number of packages that companies need to check, verify, and manage, says Chris Hughes, chief security adviser for software supply chain security firm Endor Labs. "There's simply so much software, so many projects, so many libraries, that the idea of ... monitoring them all actively is just — it's very hard," he says. Finally, educating developers and package managers on how to produce and manage code securely is another area that can produce significant gains. The OpenSSF, for example, has created a free course, LFD121, as part of that effort. "We'll be building a course on security architectures, which will also be released later this year," OpenSSF's Arasaratnam says. "As well as a course on security for not just engineers, but engineering managers, as we believe that's a critical part of the equation."
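Enumerating what's in the environment typically starts from an SBOM. Below is a minimal sketch of reading component names out of a CycloneDX-style document; the SBOM content is hypothetical:

```python
import json

# Hypothetical, minimal CycloneDX-style SBOM document.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "left-pad", "version": "1.3.0", "type": "library"},
    {"name": "requests", "version": "2.31.0", "type": "library"}
  ]
}
"""

def list_components(sbom_text):
    """Return (name, version) pairs for every component in the SBOM --
    the starting inventory for deciding which projects need scrutiny."""
    sbom = json.loads(sbom_text)
    return [(c["name"], c.get("version", "unknown"))
            for c in sbom.get("components", [])]

components = list_components(sbom_json)
```

An SCA tool does this at scale, then cross-references each component against vulnerability and maintenance-health data.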


Cross-industry standards for data provenance in AI

Knowing the source and history of datasets can help organizations better assess their reliability and suitability for training or fine-tuning AI models. This is crucial because the quality of training data directly affects the performance and accuracy of AI models. Understanding the characteristics and limitations of the training data also allows for a better assessment of model performance and potential failure modes. ... As AI regulations such as the EU AI Act evolve, data provenance becomes increasingly important for demonstrating compliance. It allows organizations to show that they use data appropriately and align with relevant laws and regulations. ... Organizations should start by reviewing the standards documentation, including the Executive Overview, use case scenarios, and technical specifications (available in GitHub). Launching a proof of concept (PoC) with a data provider is recommended to build internal confidence. Organizations lacking resources or deploying a PoC “light” may opt to use our metadata generator tool to create and access standardized metadata files


Why an Agile Culture Is Critical for Enterprise Innovation

In the end, embracing agility isn’t just about staying afloat in the turbulent waters of AI innovation; it’s about turning those waves into opportunities for growth and transformation. Because in this ever-evolving landscape, the businesses that thrive will be the ones that are flexible, responsive, and always ready to adapt to whatever comes next. Which brings me to my next point – you need to start loving failure. This requires a whole reframe because in the world of AI, getting things wrong can actually be the fastest way to get things right. Most companies are so scared of getting it wrong that they never try anything new and are frozen like a deer in headlights. In AI, that’s a death sentence. ... Be prepared for resistance. Change is scary, and you’ll always have a few “blockers” who are negative in their approach. These are the people you need to win over the most. In the meantime, you just need to weather the storm. Lastly, remember that becoming agile is a journey, not a destination. It’s about creating a mindset of continuous improvement. Always in beta? That’s absolutely fine and in the fast-paced world of AI, that’s exactly where you want to be.


The Rise of Cybersecurity Data Lakes: Shielding the Future of Data

Beyond real-time threat detection and analysis, cybersecurity data lakes offer organizations a powerful platform for vulnerability prediction and risk assessment. By examining past incidents, organizations can uncover trends and commonalities in security breaches, weak points in their defenses, and recurring threats. Cybersecurity data lakes store vast amounts of data spanning extended periods, which is a rich source of information for identifying recurring vulnerabilities or attack vectors. With techniques such as time-series analysis and pattern recognition, organizations can uncover historical vulnerability patterns through rigorous testing and use this knowledge to anticipate and mitigate future risks. In fact, this is one of the reasons why the global pentesting market is expected to rise to a value of $5 billion by 2031, with more innovative approaches like black-box pentesting to exploit hidden attack vectors and using AI for vulnerability assessment (VAS) to improve efficiency. When combined with other vulnerability assessment methods like threat modeling and red team exercises, predictive modeling can also help organizations identify potential attack paths and attack surface areas and proactively implement defensive measures.
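As a toy illustration of that historical pattern recognition, the sketch below flags attack vectors that recur across multiple years of incident records; the incident data is hypothetical:

```python
# Hypothetical historical incident log: (year, attack_vector) pairs.
incidents = [
    (2021, "phishing"), (2021, "sql_injection"), (2022, "phishing"),
    (2022, "credential_stuffing"), (2023, "phishing"), (2023, "sql_injection"),
]

def recurring_vectors(records, min_years=2):
    """Flag attack vectors seen in at least `min_years` distinct years --
    the kind of simple recurrence query a data lake analysis might start with."""
    years_seen = {}
    for year, vector in records:
        years_seen.setdefault(vector, set()).add(year)
    return sorted(v for v, years in years_seen.items() if len(years) >= min_years)

flagged = recurring_vectors(incidents)
```

Real deployments replace this counting with time-series models over far richer telemetry, but the principle, surfacing what keeps coming back, is the same.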


Internships can be a gold mine for cybersecurity hiring

Though an internship can pay off for an employer in the form of a fresh crop of talent to hire, it requires the company to invest time, planning, oversight, and resources. Designating one or more people to manage the process internally can make things easier for the organization. “Sit down with the supervisory personnel so they understand what that position is being advertised for, what the expected outcomes are and how to manage that intern, the program needs, and how they have to report [on that intern],” ... If possible, Smith recommends mentoring an intern, not simply ticking off a bureaucratic checklist of their tasks: “I do fervently believe you essentially need a sponsor, someone who’s going to take the intern under his or her wing and nurture that relationship, nurture that person.” Chiasson warns employers to manage their own expectations as carefully as they manage the interns themselves. Rather than expecting a unicorn to show up — an intern with one or more degrees, several technical certifications and other prior workplace experience — she urges companies to “take them on and then train them based on what you require.”


Desirable Data: How To Fall Back In Love With Data Quality

With so much data being pumped out at breakneck rates, it can seem like an insurmountable challenge to ensure data accuracy, completeness, and consistency. And despite technological, governance, and team efforts, poor data can still endure. As such, maintaining data quality can feel like a perennial challenge. But quality data is fundamental to a company’s digital success. In order to create a business case for embracing data quality, you have to, firstly, demonstrate the far-reaching consequences of poor data quality on organisational performance. If you can present the problem from a business standpoint — backed by evidence and real-world scenarios of data quality issues leading to incurred costs, reputational risk, and uncapitalised opportunities — you can implement proactive measures and trigger a desire by top-level management to adapt processes. To bring your case to life, you then have to find ways of quantifying the business impact of data quality issues. This could take the form of illustrating the effect of bad data on a marketing campaign, showing the difference with and without data quality in relation to usable records and sales leads, and how this impacts your revenue.
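One way to quantify that impact is a simple usable-records metric; the CRM records below are hypothetical:

```python
# Hypothetical CRM records; None marks a missing field.
records = [
    {"email": "a@example.com", "phone": "555-0100"},
    {"email": None, "phone": "555-0101"},
    {"email": "c@example.com", "phone": None},
    {"email": None, "phone": None},
]

def usable_share(rows, required=("email",)):
    """Fraction of records with every required field present -- one concrete
    number (e.g. contactable sales leads) to anchor a data quality business case."""
    usable = sum(1 for r in rows if all(r.get(f) is not None for f in required))
    return usable / len(rows)

share = usable_share(records, required=("email",))
```

Multiplying that share by campaign reach and average revenue per lead turns an abstract quality complaint into a figure management can act on.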



Quote for the day:

"Defeat is not bitter unless you swallow it." -- Joe Clark

Daily Tech Digest - July 20, 2024

CrowdStrike’s IT outage makes it clear why cyber resilience matters

“This was not a code update. This was actually an update to content. And what that means is there’s a single file that drives some additional logic on how we look for bad actors. And this logic was pushed out and caused an issue only in the Microsoft environment,” CrowdStrike CEO and founder George Kurtz told Jim Cramer during an interview on CNBC earlier today. Trustwave CISO Kory Daniels recently said that “boards have begun asking the question: Is it important to have a formally titled chief resilience officer?” VentureBeat has learned that more boards of directors are adding cyber resilience to their broader risk management project teams. High-profile ransomware attacks that create chaos across supply chains are among the most costly for any business to withstand, as the United Healthcare breach makes clear. Outages caused by misconfigurations highlight the need for a unique form of cyber resilience so actively pursued that it becomes a core part of a company’s DNA. Misconfigured updates will continue to cause global outages. That goes with the territory of an always-on, real-time world defined by intricate, integrated systems. 


Federal judge greenlights securities fraud charges against SolarWinds and its CISO

“The biggest message for CISOs is that they need to make sure that not only must the board and senior management know about all risks, but they need to reflect that in whatever they tell third-parties and investors.” Brian Levine, a former federal prosecutor who today serves as the managing director at Ernst & Young overseeing cybersecurity strategies, agreed, saying “for SolarWinds, this was not a good result. The court found that they engaged in the most serious conduct, which is securities fraud.” But Levine said the bulk of the decision was more bad news for the SEC than it was good news for SolarWinds. “Agencies like the SEC are not used to bringing charges and losing on most of them,” Levine said. “For the court to find so many of the SEC theories were overreaches or incorrect is unusual. It will make some at the SEC think about how aggressive they want to be in using untested theories going forward.” Levine said he saw the ruling delivering a small message to enterprise security leaders: “Smart CISOs may be more careful about what they say in public statements. And also, whether they make public statements about their security at all. You don’t get much credit for making them,” and there is a potential downside.


The Looming Crisis in the Data Observability Market

Enterprises should push for standards and openness from observability vendors. The reason isn’t simply technical. The real problem with closed systems is that they limit value. Today, enterprises express grave concerns about skyrocketing observability costs because they are locked into overpaying for different tools that do the same task in other areas of the organization. In contrast, tools that adhere to OTel are beginning to emerge, and these are better able to collect, export, and analyze telemetry data from any source. With the spread of OTel and the development of a standard observability operating system, enterprises will own the data they generate, with no vendor lock-in at any point along the observability and monitoring path. Today, the reality is that costs are skyrocketing because the network team will use one tool, security relies on something else, and e-commerce prefers yet another. Each team needs observability to optimize performance, but they wouldn’t need to keep overpaying for duplicate tools if they genuinely owned their data. This means that it is vitally essential for observability buyers to insist on open standards and APIs in general and OTel in particular for observability.


Using Threat Intelligence to Predict Potential Ransomware Attacks

The information gathered by threat intelligence initiatives includes details about cyberattack plans, methods, bad actor groups that pose a threat, possible weak spots within the organization’s current security infrastructure, and more. By gathering information and conducting data analysis, threat intelligence tools can help organizations identify, understand, and proactively defend against attacks. Threat intelligence can help thwart attacks before they occur and strengthen an organization’s security infrastructure. This means that security analysts can utilize threat intelligence to refine their research and locate the malicious actor who is either planning or executing a ransomware attack. ... Additionally, threat intelligence platforms can utilize machine learning, automated correlation processing, and artificial intelligence to pinpoint specific cyber breach occurrences and map patterns of behavior across instances. For example, analysts can easily recognize the common tactics, techniques, and procedures used by current ransomware attack groups. By identifying common attack methods, organizations can better prepare to disarm the effectiveness of these methods and prevent an attack.
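That TTP correlation can be reduced to a toy overlap score between observed behaviors and known group profiles; the group names and tactics below are hypothetical:

```python
# Hypothetical TTP profiles for known ransomware groups.
group_ttps = {
    "GroupA": {"phishing", "lateral_movement", "data_exfiltration"},
    "GroupB": {"rdp_bruteforce", "lateral_movement", "encryption"},
}

def likely_groups(observed, profiles, min_overlap=2):
    """Rank groups whose known tactics overlap the observed behavior --
    a simplified stand-in for the correlation a threat intel platform performs."""
    scores = {g: len(ttps & observed) for g, ttps in profiles.items()}
    return sorted((g for g, s in scores.items() if s >= min_overlap),
                  key=lambda g: -scores[g])

observed = {"phishing", "lateral_movement", "encryption"}
matches = likely_groups(observed, group_ttps)
```

Production platforms score against structured taxonomies such as MITRE ATT&CK rather than flat sets, but the matching logic starts from the same idea.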


16 Effective Strategies For Measuring Reputation Risk

An early indicator of reputation risk is employee behavior changes and feedback. Measure these via internal surveys and turnover rates. Employees experience the repercussions of external reputation issues firsthand, which can be early indicators of deeper problems. This not only helps detect internal issues that could spill over into public perception, but it also encourages healthy corporate culture. ... Reputation risk comes in many forms, seen and hidden. Being sensitive to customer sentiment, employee feedback, media perception and other stakeholders is important. Using a combination of tools such as media monitoring, social media analytics across multiple platforms and customer and employee surveys can help a company detect negative signals and take corrective action before the risk escalates. ... It's important to define what reputational "risk" really means for your company. The risk could take the form of negative coverage or critical sentiment on social media, but inconspicuousness can present a profound threat, particularly for startups or companies looking to transform a legacy brand. Not all press is good press, but risk aversion to the point of invisibility can be a risk, too.


Safeguard Personal and Corporate Identities with Identity Intelligence

The ways that cybercriminals get their hands on credentials vary. Phishing schemes – deceptive emails designed to trick recipients into divulging their credentials – are one way. Another method that's gaining in popularity is stealer malware. Stealers are a category of malware that harvest credentials such as usernames, passwords, cookies, and other data from infected systems. Other tactics include brute force attacks, where threat actors use tools to automatically generate passwords and then try them out one by one to access a user account, and social engineering tactics, in which threat actors manipulate users into giving away sensitive information. According to some estimates, by trying one million random combinations of emails and passwords, attackers can potentially compromise between 10,000 and 30,000 accounts. ... Robust security measures like multi-factor authentication (MFA) and consistent, stringent employee training and enforcement of data protection policies can help make companies less vulnerable to this type of threat. However, missteps happen. And when they do, security teams must be immediately alerted when any compromised access is discovered on dark web marketplaces. This is where identity intelligence comes in.
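The cited estimate implies a success rate of roughly 1 to 3 percent; the arithmetic is straightforward:

```python
def expected_compromises(attempts, success_rate):
    """Expected number of account takeovers from credential-stuffing attempts."""
    return round(attempts * success_rate)

attempts = 1_000_000
low = expected_compromises(attempts, 0.01)   # 1% success rate
high = expected_compromises(attempts, 0.03)  # 3% success rate
```

This reproduces the range of 10,000 to 30,000 compromised accounts per million attempts, which is why even a small reduction in the success rate (e.g. via MFA) removes thousands of takeovers.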

With manufacturing systems becoming more complex, AI-driven data pattern recognition is crucial for sharpening quality control, predicting equipment issues, and optimizing production for fewer defects, higher Overall Equipment Effectiveness (OEE), and significant cost savings. With Industry 4.0 and the emergence of Industry 5.0, there will be too much data being generated every second for the human mind to cope with — AI will become an indispensable tool for manufacturers ... As roles evolve, workers will need new skills. Providing them with the necessary tools and training to work alongside, and be augmented by, AI will ensure a productive synergy between human ingenuity and machine efficiency. AI greatly enhances the value proposition of connected worker platforms by empowering the worker with capabilities and insights designed to further optimize their performance. ... With AI-powered systems, manufacturers can now optimize their operations and make more informed decisions, leading to reduced waste and improved efficiency. The IFS AI research found respondents think AI can have the biggest impact on sustainability through designing better flow in manufacturing processes to improve efficiency.


A&M: AI in Fintech – A Double-Edged Sword for Cybersecurity

“It is essential that fintechs are abreast of the latest challenges and the solutions that are available to ensure that they are best able to protect both their customers and their business,” he says. “One only has to look at how 'well' deepfakes have developed over the past couple of years to see how things are progressing… never mind the impact GenAI will have on the quality and realism of such attacks.” While cybersecurity aims must remain at the forefront of financial institutions’ thinking, Phil reminds us there is ‘no silver bullet’ solution to solve the issue of fraudsters today. “It is a case of improving awareness, research and knowledge to ensure that practices, procedures and technologies are implemented to improve protection,” he continues. “One of the most commonly overlooked elements of this is training and awareness, as this can be a key control in helping mitigate risk.” ... “The emergence of new fraud typologies (particularly more sophisticated APP fraud) has led to a change in mindset in recent years – FS institutions are now increasingly aware that educational initiatives, especially when tailored to the customer base in question, form a critical component of their preventative fraud controls.”


Khan believes that AI and human intelligence can be combined, dispelling the fear that AI may eventually replace humans as it advances in its ability to perform tasks. "The study examines the challenges in incorporating AI technology in real-world industrial applications and how IA can improve process monitoring, fault detection, and decision-making to improve process safety," Amin said. Khan contends that AI will improve safety by analyzing real-time data, predicting maintenance needs, and automatically detecting faults. However, the IA approach, using human decision-making, is also expected to reduce incident rates, lower operational costs, and increase reliability. "The application of AI in chemical engineering presents significant challenges, which means it is not enough to ensure comprehensive process safety," Sajid said. ... AI risks include data quality issues, overreliance on AI, lack of contextual understanding, model misinterpretation, and training and adaptation challenges. On the other hand, the risks associated with IA include human error in feedback, conflict in AI-HI decision-making, biased judgment, complexity in implementation, and reliability issues.


Energy and the promise of AI

The acquisition of electricity is becoming a limiting factor in running data centers, and hyperscale customers have turned to nuclear power as a way of powering their data centers with zero-carbon generation. ... While there is potential for reducing the power consumption required for AI workloads through new algorithms and approaches, more power-efficient GPUs, and new sources of power, direct-to-chip liquid cooling (DLC) today offers the most immediate opportunity to reduce PUE and improve power efficiency, with a PUE of 1.06 achieved in practice through DLC. In addition, the latest high core-count server CPUs have improved core/watt performance, allowing data center footprint reduction and the associated power savings while achieving the same level of performance as older systems. Many of these systems will also benefit from DLC due to the increased processor TDP needed for these higher core counts. While many data center operators want the latest and fastest CPU and GPU-based systems, there is an opportunity to investigate the right match for the agreed-upon SLAs and the energy required for the servers.
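For reference, PUE (power usage effectiveness) is simply total facility power divided by the power delivered to IT equipment, so the 1.06 figure cited above means only 6% overhead beyond the IT load itself:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal (every watt goes to IT gear)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,060 kW overall for a 1,000 kW IT load
# has a PUE of 1.06, matching the DLC figure cited above.
```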



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - July 19, 2024

Master IT Compliance: Key Standards and Risks Explained

IT security focuses on protecting an organization’s data and guarding against breaches and cyberattacks. While IT regulatory policies are generally designed to ensure security, making security and compliance closely intertwined, they are not identical. Regulatory policies frequently mandate specific security practices, thus aligning compliance efforts with security goals. For example, regulations might require an organization to have data encryption, access controls, and regular security audits. However, being compliant does not automatically guarantee an organization’s security. Compliance mandates often set minimum standards, and organizations may need to implement additional security measures beyond what is required to adequately protect their data. Conversely, some aspects of the compliance process may do nothing to enhance security. ... Creating an IT compliance checklist can greatly simplify the arduous task of maintaining compliance. The checklist ensures critical tasks are consistently performed and is tailored to each organization’s industry, specific compliance requirements, and daily operations.


The Dynamic Transformation Of Enterprise Fraud Management Ecosystems

While collaboration and information sharing have become pivotal, financial institutions are also faced with the pressure to consolidate technology and reduce the number of vendors with whom they work. This is evidenced by the growing number of financial institutions investing in cyber fraud fusion centres to create a centralized environment that aligns the data, technology and operational capabilities of traditionally siloed teams. ... Given the complexity of cybercrime and the differences in financial institutions and their unique requirements, EFM strategy requires a layered approach and flexibility in the solutions that support it. A layered defence allows financial institutions to address different aspects and stages of fraud attempts across the digital lifecycle and cross-verify suspicious activities to increase confidence in risk decisions. The importance of behavioural biometrics intelligence within the EFM ecosystem can no longer be ignored given customer adoption and success. Many forward-thinking institutions have implemented the technology to bolster or complement existing EFM systems, detect emerging fraud types and elevate customer safety in digital banking.


Law Enforcement Eyes AI for Investigations and Analysis

For all of its potential benefits, AI is also vulnerable to misuse. Weak oversight, for instance, can lead to biases in predictive policing or errors in evidence analysis. "It's crucial to implement checks and balances to ensure that AI is used ethically and accurately," Rome says. Meanwhile, many law enforcement organizations are reluctant to embrace technology due to budget constraints, a lack of technical expertise, and an overall resistance to change. Concerns about privacy and civil liberties are also hindering adoption. In particular, there's the possibility of AI bias, which can lead to inaccurate conclusions when discriminatory data and algorithms are baked into AI models. ... Despite the challenges, the long-term outlook is promising, Rome says. "As technology advances and law enforcement agencies become more familiar with AI's potential, its adoption is likely to increase," he predicts. Claycomb agrees, but notes that adopters will need to implement workflows that take full advantage of other technology tools, including deploying powerful and connected mobile device fleets.


How Generative AI Has Forever Changed the Software Testing Process

Automation has been a game changer in the software testing process, but there is still one big problem: tests can eventually lose their relevance and accuracy. ... Generative AI, unlike your average automation process, is backed by a pool of data. On top of that, it’s continuously learning with each command and addition to the database. This means that if a new test case has a slightly different aim, the AI system should pick up on that and make the necessary adjustments. This can still be hit or miss, depending on how well trained the underlying model and data are, but with proper human assistance, it could take a significant load off the development process. ... When testing models are created manually, they are built against a standard background. The developer has an environment in mind (or several of them), creating a realistic area to test against. This can bring various limitations, depending on how many data sets you use. However, generative AI can create diverse models that the human brain could not have even thought of. Indeed, AI can tend to hallucinate when it does not have enough data, but even those scenarios can give you a couple of ideas.


Amid Licensing Uncertainty, How Should IaC Management Adapt?

It’s a deliberation that organizations might have comfortably back-burnered, until last summer when Terraform’s continued viability as an IaC industry-standard suddenly came under intense scrutiny when HashiCorp changed its license scheme from a purely open source model to a less-than-open alternative. Since that time, the Linux Foundation-backed OpenTofu initiative appears to have changed the headers of code HashiCorp had previously released under its new Business Source License (BUSL), rereleasing it under the MPL 2.0 license. ... Organizations will want to impose restrictions on developers’ resource usage, Williams foresees. Those restrictions will be based not on capacity — which the IaC engineer understands more readily — but instead upon cost. Presently, enabling the restrictions necessary to maintain compliance and achieve security objectives requires, at the very least, expert guidance. Meanwhile, the influx of talent in platform engineering is weighted towards AI engineers who may not know what these infrastructure resources even are.


Implementing Threat Modeling in a DevOps Workflow

Integrating threat modeling into a DevOps workflow involves embedding security practices throughout the development and operations lifecycle. This approach ensures continuous security assessment and improvement, aligning with the DevOps principles of continuous integration and continuous deployment (CI/CD). ... Automated tools play a crucial role in facilitating continuous threat modeling and security assessments. Tools such as OWASP Threat Dragon, Microsoft Threat Modeling Tool and IriusRisk can automate various aspects of threat modeling, making it easier to integrate these practices into the CI/CD pipeline. Automation helps ensure that threat modeling is performed consistently and efficiently, reducing the burden on development and security teams. ... Effective threat modeling requires close collaboration between development, operations and security teams. This cross-functional approach ensures that security is considered from multiple perspectives and throughout the entire development lifecycle. Collaboration can be fostered through regular meetings, joint workshops and shared documentation.


Want ROI from genAI? Rethink what both terms mean

Early genAI apps often delivered breathtaking results in small pilots, setting expectations that didn’t carry over to larger deployments. “One of the primary culprits of the cost versus value conundrum is lack of scalability,” said KX’s Twomey. He points to an increasing number of startup companies using open-source genAI technology that is “sufficient for introductory deployments, meaning they work nicely with a couple hundred unstructured documents. Once enterprises feel comfortable with this technology and begin to scale it up to hundreds of thousands of documents, the open-source system bloats and spikes running costs,” he said. ... Even when genAI succeeds, its results are sometimes less valuable than anticipated. For example, generative AI is a very effective tool for creating information that is generally handled by lower-level staffers or contractors, where it is simply tweaking existing material for use in social media or e-commerce product descriptions. It still needs to be verified by humans, but it has the potential for cutting costs in creating low-level content. But because it often is low level, some have questioned whether that is really going to deliver any meaningful financial advantages.


How AI Will Fuel the Future of Observability

A unified observability platform makes use of AI via AIOps, which applies AI and machine learning (ML) models to collect data from throughout the enterprise – from logs and alerts to applications, containers, and clouds. It performs tasks ranging from root cause analysis and incident prevention to advanced correlation. And although AI has already proved valuable, its impact is about to become considerably more pronounced, fueling observability in the near- and long-term future. ... Via constant monitoring, an AI could ingest incoming data and detect an anomaly or some other activity that exceeds preset thresholds. It could then perform a series of actions, similar to what happens with remediation scripts, to resolve the problem. Just as importantly, if the AI model doesn’t resolve the problem, it would automatically open a ticket with the platform used for managing issues. ... AI and ML models need data to work well. And part of assessing your environment is identifying the visibility gaps in your organization. A unified observability platform can provide visibility into the entire enterprise and how everything within it is connected.
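The monitor-remediate-escalate loop described above reduces to a few lines. This is a toy sketch: the `remediate` and `open_ticket` callables stand in for real runbook automation and a ticketing-platform integration:

```python
def handle_metric(value, threshold, remediate, open_ticket):
    """If a metric exceeds its preset threshold, attempt automated
    remediation; if that fails, escalate by opening a ticket."""
    if value <= threshold:
        return "ok"
    if remediate():          # e.g. restart a service, scale out a pool
        return "remediated"
    open_ticket()            # hand off to the issue-management platform
    return "escalated"
```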


Fearing disruption? A skills-based talent strategy builds business resiliency

“It’s important for IT leaders to understand that being proactive in developing the skills of their tech workforce is crucial to helping future-proof their operations against technological disruption. Those who invest in the right skills — and help their workforce gain new skills — are likely to remain ahead of the wave of digital transformation,” says Ryan Sutton, a technology hiring and consulting expert at Robert Half. Developing the skills necessary to support transformation initiatives builds business resiliency. By anticipating future skills needs, IT leaders can ensure their organizations have the right training programs in place to upskill workers as necessary, Sutton says. ... “The best way for IT leaders to know which skills gap would be a threat is by establishing a strategic workforce plan connected to changing business demands. Some organizations are getting better at building databases that track employee skills in real-time as opposed to relying on job descriptions, which may not always be accurate or updated. It’s time to understand what skills exist on your team to help identify gaps,” says Jose Ramirez, director analyst at Gartner.


Data centre trends: Is it possible to digitalise and decarbonise?

It can be difficult to balance the push for digitalisation and tech progress with the need for sustainability with the climate crisis bearing down. Add in developing regulation, cybersecurity and the need to upgrade infrastructure and there are a lot of factors for IT teams to consider right now. ... Lantry also argues that digitalisation can be a pathway to sustainability, rather than a barrier to it, as businesses “adopting digital-first strategies” can help achieve their environmental, social and governance (ESG) objectives. “By integrating these practices, IT leaders can ensure that their digital transformation initiatives align with their sustainability goals,” he said. When Google revealed its significant rise in emissions earlier this year, it described its own climate neutral goals as “extremely ambitious” and said that it “won’t be easy”. But the tech giant also claimed that technology like AI can play a “critical enabling role” in helping the world to reach a “low-carbon future” by aiding in various environmental tasks. Lantry had a similar view when it comes to the potential benefits of the broader data centre sector.



Quote for the day:

"Education is the ability to listen to almost anything without losing your temper or your self-confidence." -- Robert Frost

Daily Tech Digest - July 18, 2024

The Critical Role of Data Cleaning

Data cleaning is a crucial step that eliminates irrelevant data, identifies outliers and duplicates, and fixes missing values. It involves removing errors, inconsistencies, and, sometimes, even biases from raw data to make it usable. While buying pre-cleaned data can save resources, understanding the importance of data cleaning is still essential. Inaccuracies can significantly impact results, and in many cases, until low-value data is removed, the rest is hardly usable. Cleaning works as a filter, ensuring that the data passing through to the next step is more refined and relevant to your goals. ... At its core, data cleaning is the backbone of robust and reliable AI applications. It helps guard against inaccurate and biased data, ensuring AI models and their findings are on point. Data scientists depend on data cleaning techniques to transform raw data into a high-quality, trustworthy asset. ... Interestingly, LLMs that have been properly trained on clean data can play a significant role in the data cleaning process itself. Their advanced capabilities enable LLMs to automate and enhance various data cleaning tasks, making the process more efficient and effective.
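The core steps named above, removing duplicates, filling missing values, and flagging outliers, can be sketched in plain Python; real pipelines would typically reach for a library such as pandas, and the single-column record shape here is purely illustrative:

```python
import statistics

def clean(records):
    """Minimal cleaning pass over records like {"value": 1.2} or {"value": None}:
    drop exact duplicates, fill missing values with the median, and flag
    outliers beyond 3 standard deviations rather than silently dropping them."""
    # 1. Remove exact duplicates while preserving order
    seen, deduped = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            deduped.append(dict(r))
    # 2. Fill missing numeric values with the median of observed ones
    observed = [r["value"] for r in deduped if r["value"] is not None]
    median = statistics.median(observed)
    for r in deduped:
        if r["value"] is None:
            r["value"] = median
    # 3. Flag outliers (|z| > 3) for review downstream
    mean = statistics.mean(r["value"] for r in deduped)
    stdev = statistics.pstdev(r["value"] for r in deduped)
    for r in deduped:
        r["outlier"] = stdev > 0 and abs(r["value"] - mean) > 3 * stdev
    return deduped
```

Flagging rather than deleting outliers reflects the filtering idea in the passage: the questionable data is held back for inspection instead of quietly contaminating the next step.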


What Is Paravirtualization?

Paravirtualization builds upon traditional virtualization by offering extra services, improved capabilities or better performance to guest operating systems. With traditional virtualization, organizations abstract the underlying resources via virtual machines to the guest so they can run them as is, says Greg Schulz, founder of the StorageIO Group, an IT industry analyst consultancy. However, those virtual machines use all of the resources assigned to them, meaning there is a great deal of idle time, even though it doesn’t appear so, according to Kalvar. Paravirtualization uses software instruction to dynamically size and resize those resources, Kalvar says, turning VMs into bundles of resources. They are managed by the hypervisor, a software component that manages multiple virtual machines in a computer. ... One of the biggest advantages of paravirtualization is that it is typically more efficient than full virtualization because the hypervisor can closely manage and optimize resources between different operating systems. Users can manage the resources they consume on a granular basis. “I’m not buying an hour of a server, I’m buying seconds of resource time,” Kalvar says. 


Leaked Access Keys: The Silent Revolution in Cloud Security

The challenge for service accounts is that MFA does not work, and network-level protection (IP filtering, VPN tunneling, etc.) is not consistently applied, primarily due to complexity and costs. Thus, service account key leaks often enable hackers to access company resources. While phishing is unusual in the context of service accounts, leakages are frequently the result of developers posting them (unintentionally) online, often in combination with code fragments that unveil the user to whom they apply. ... Now, Google has changed the game with its recent policy change. If an access key appears in a public GitHub repository, GCP deactivates the key, no matter whether applications crash. Google's announcement marks a shift in the risk and priority tango. Gone are the days when patching vulnerabilities could take days or weeks. Welcome to the fast-paced cloud era. Zero-second attacks after credential leakages demand zero-second fixing. Preventing an external attack becomes more important than avoiding crashing customer applications – that is at least Google's opinion.
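Teams can reduce this class of leak by scanning commits for credential-shaped strings before they ever reach a public repository. A minimal sketch using two widely published key formats (AWS access key IDs begin with `AKIA`, Google API keys with `AIza`); production scanners such as gitleaks or trufflehog cover far more patterns and run as pre-commit hooks or CI checks:

```python
import re

# Widely published credential formats; real scanners cover many more.
KEY_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
}

def scan_for_keys(text):
    """Return (pattern_name, matched_string) pairs found in the text."""
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```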


Juniper advances AI networking software with congestion control, load balancing

On the load balancing front, Juniper has added support for dynamic load balancing (DLB) that selects the optimal network path and delivers lower latency, better network utilization, and faster job completion times. From the AI workload perspective, this results in better AI workload performance and higher utilization of expensive GPUs, according to Sanyal. “Compared to traditional static load balancing, DLB significantly enhances fabric bandwidth utilization. But one of DLB’s limitations is that it only tracks the quality of local links instead of understanding the whole path quality from ingress to egress node,” Sanyal wrote. “Let’s say we have CLOS topology and server 1 and server 2 are both trying to send data called flow-1 and flow-2, respectively. In the case of DLB, leaf-1 only knows the local links utilization and makes decisions based solely on the local switch quality table where local links may be in perfect state. But if you use GLB, you can understand the whole path quality where congestion issues are present within the spine-leaf level.”
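The difference Sanyal describes can be captured in two small selection policies: DLB ranks uplinks by local link utilization only, while GLB ranks each uplink by the bottleneck (worst per-hop) utilization along its full path to the egress node. A simplified sketch with illustrative utilization values, not a model of Juniper's actual implementation:

```python
def pick_uplink_dlb(local_utilization):
    """Dynamic load balancing: choose the least-utilized local uplink,
    blind to congestion deeper in the fabric."""
    return min(local_utilization, key=local_utilization.get)

def pick_uplink_glb(path_utilization):
    """Global load balancing: choose the uplink whose end-to-end path has
    the lowest bottleneck (maximum per-hop) utilization."""
    return min(path_utilization, key=lambda up: max(path_utilization[up]))

# If spine1's local link is quiet (0.2) but its downstream hop is congested
# (0.9), DLB still picks spine1, while GLB correctly prefers spine2.
```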


Impact of AI Platforms on Enhancing Cloud Services and Customer Experience

AI platforms enable businesses to streamline operations and reduce costs by automating routine tasks and optimizing resource allocation. Predictive analytics, powered by AI, allows for proactive maintenance and issue resolution, minimizing downtime and ensuring continuous service availability. This is particularly beneficial for industries where uninterrupted access to cloud services is critical, such as finance, healthcare, and e-commerce. ... AI platforms are not only enhancing backend operations but are also revolutionizing customer interactions. AI-driven customer service tools, such as chatbots and virtual assistants, provide instant support, personalized recommendations, and seamless user experiences. These tools can handle a wide range of customer queries, from basic information requests to complex problem-solving, thereby improving customer satisfaction and loyalty. The efficiency and round-the-clock availability of AI-driven tools make them invaluable for businesses. By the year 2025, it is expected that AI will facilitate around 95% of customer interactions, demonstrating its growing influence and effectiveness.


2 Essential Strategies for CDOs to Balance Visible and Invisible Data Work Under Pressure

Short-termism under pressure is a common mistake, resulting in an unbalanced strategy. How can we, as data leaders, successfully navigate such a scenario? “Working under pressure and with limited trust from senior management can force first-time CDOs to commit to an unbalanced strategy, focusing on short-term, highly visible projects – and ignore the essential foundation.” ... The desire to invest in enabling topics stems from the balance between driving and constraining forces. The senior management tends to ignore enabling topics because they rarely directly contribute to the bottom line; they can be a black box to a non-technical person and require multiple teams to collaborate effectively. On the other hand, Anne knew that the same people eagerly anticipated the impact of advanced analytics such as GenAI and were worried about potential regulatory risks. With the knowledge of the key enabling work packages and the motivating forces at play, Anne has everything she needs to argue for and execute a balanced long-term data strategy that does not ignore the “invisible” work required.


Gen AI Spending Slows as Businesses Exercise Caution

Generative AI has advanced rapidly over the past year, and organizations are recognizing its potential across business functions. But businesses have now taken a cautious stance regarding gen AI adoption due to steep implementation costs and concerns related to hallucinations. ... This trend reflects a broader shift away from the AI hype, and while businesses acknowledge the potential of this technology, they are also wary of the associated risks and costs, according to Michael Sinoway, CEO, Lucidworks. "The flattened spending suggests a move toward more thoughtful planning. This approach ensures AI adoption delivers real value, balancing competitiveness with cost management and risk mitigation," he said. ... Concerns regarding implementation costs, accuracy and data security have increased considerably in 2024. The number of business leaders with concerns related to implementation costs has increased 14-fold and those related to response accuracy have grown fivefold. While concerns about data security have increased only threefold, it remains the biggest worry.


CIOs are stretched more than ever before — and that’s a good thing

“Many CIOs have built years of credibility and trust by blocking and tackling the traditional responsibilities of the role,” she adds. “They’re now being brought to the conversation as business leaders to help the organization think through transformational priorities because they’re functional experts like any other executive in the C-suite.” ... “Boards want technology to improve the top and bottom line, which can be a tough balance, even if it’s one that CIOs are getting used to managing,” says Nash Squared’s White. “On the one hand, they’re being asked to promote innovation and help generate revenue, and on the other, they’re often charged with governance and security, too.” The importance of technology will only continue to increase going forward as well. Gen AI, for example, will make it possible to boost productivity while reducing costs. CyberArk’s Grossman expects the central role of digital leaders in exploiting these emerging technologies will mean high-level CIOs will be even more important in the future.


What Is a Sovereign Cloud and Who Truly Benefits From It?

A sovereign cloud is a cloud computing environment designed to help organizations comply with regulatory rules established by a particular government. This often entails ensuring that data stored within the cloud environment remains within a specific country. But it can also involve other practices, as we explain below. ... For one thing, cost. In general, cloud computing services on a sovereign cloud cost more than their equivalents on a generic public cloud. The exact pricing can vary widely depending on a number of factors, such as which cloud regions you select and which types of services you use, but in general, expect to pay a premium of at least 15% to use a sovereign cloud. A second challenge of using sovereign clouds is that in some cases your organization must undergo a vetting process to use them because some sovereign cloud providers only make their solutions available to certain types of organizations — often, government agencies or contractors that do business with them. This means you can't just create a sovereign cloud account and start launching workloads in a matter of minutes, as you could in a generic public cloud.


Securing datacenters may soon need sniffer dogs

So says Len Noe, tech evangelist at identity management vendor CyberArk. Noe told The Register he has ten implants – passive devices that are observable with a full body X-ray, but invisible to most security scanners. Noe explained he's acquired swipe cards used to access controlled premises, cloned them in his implants, and successfully entered buildings by just waving his hands over card readers. ... Noe thinks hounds are therefore currently the only reliable means of finding humans with implants that could be used to clone ID cards. He thinks dogs should be considered because attackers who access datacenters using implants would probably walk away scot-free. Noe told The Register that datacenter staff would probably notice an implant-packing attacker before they access sensitive areas, but would then struggle to find grounds for prosecution because implants aren't easily detectable – and even if they were the information they contain is considered medical data and is therefore subject to privacy laws in many jurisdictions.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree

Daily Tech Digest - July 17, 2024

Optimization Techniques For Edge AI

Edge devices often have limited computational power, memory, and storage compared to centralised servers. Because of this, cloud-centric ML models need to be retargeted so that they fit within the available resource budget. Further, many edge devices run on batteries, making energy efficiency a critical consideration. The hardware diversity of edge devices, which range from microcontrollers to powerful edge servers with different capabilities and architectures, requires different model refinement and retargeting strategies. ... Many use cases involve the distributed deployment of numerous IoT or edge devices, such as CCTV cameras, working collaboratively towards specific objectives. These applications often have built-in redundancy, making them tolerant of failures, malfunctions, or less accurate inference results from a subset of edge devices. Algorithms can be employed to recover from missing, incorrect, or less accurate inputs by utilising the global information available. This approach allows high- and low-accuracy models to be combined to optimise resource costs while maintaining the required global accuracy through the available redundancy.
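Quantization is one of the standard retargeting techniques implied here: shrinking cloud-trained float weights to int8 to fit an edge device's memory and compute budget, at a small cost in accuracy. A minimal symmetric post-training quantization sketch (real toolchains such as TensorFlow Lite or ONNX Runtime do this per-tensor or per-channel with calibration data):

```python
def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8 range.
    Each weight maps to round(w / scale), where scale spans the max magnitude."""
    m = max(abs(w) for w in weights)
    scale = m / 127.0 if m else 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights for inspection or mixed execution."""
    return [v * scale for v in q]
```

Eight-bit storage cuts the model footprint to a quarter of float32, and the reconstruction error per weight is bounded by half the scale step.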


The Cyber Resilience Act: A New Era for Mobile App Developers

Collaboration is key for mobile app developers to prepare for the CRA. They should first conduct a thorough security audit of their apps, identifying and addressing any vulnerabilities. Then, they’ll want to implement a structured plan to integrate the needed security features, based on the CRA’s checklist. It may also make sense to invest in a partnership with cybersecurity experts who can more efficiently provide more insights and help streamline this process in general. Developers cannot be expected to become top-notch security experts overnight. Working with cybersecurity firms, legal advisors and compliance experts can clarify the CRA and simplify the path to compliance and provide critical insights into best practices, regulatory jargon and tech solutions, ensuring that apps meet CRA standards and maintain innovation. It’s also important to note that keeping comprehensive records of compliance efforts is essential under the CRA. Developers should establish a clear process for documenting security measures, vulnerabilities addressed, and any breaches or other incidents that were identified and remediated. 


Sometimes the cybersecurity tech industry is its own worst enemy

One of the fundamental infosec problems facing most organizations is that strong cybersecurity depends on an army of disconnected tools and technologies. That’s nothing new — we’ve been talking about this for years. But it’s still omnipresent. ... To a large enterprise, “platform” is a code word for vendor lock-in, something organizations tend to avoid. Okay, but let’s say an organization was platform curious. It could also take many months or years for a large organization to migrate from distributed tools to a central platform. Given this, platform vendors need to convince a lot of different people that the effort will be worth it — a tall task with skeptical cybersecurity professionals. ... Fear not, for the security technology industry has another arrow in its quiver — application programming interfaces (APIs). Disparate technologies can interoperate by connecting via their APIs, thus cybersecurity harmony reigns supreme, right? Wrong! In theory, API connectivity sounds good, but it is extremely limited in practice. For it to work well, vendors have to open their APIs to other vendors. 


How to Apply Microservice Architecture to Embedded Systems

In short, the process of deploying and upgrading microservices for an embedded system has a strong dependency on the physical state of the system’s hardware. But there’s another significant constraint: data exchange. Data exchange between embedded devices is best implemented using a binary data format. Space and bandwidth are limited on an embedded processor, so text-based formats such as XML and JSON won’t work well. Rather, a binary format such as protocol buffers or a custom binary format is better suited for communication in a microservices-oriented architecture (MOA) scenario in which each microservice in the architecture is hosted on an embedded processor. ... Many traditional distributed applications can operate without each microservice in the application being immediately aware of the overall state of the application. However, knowing the system’s overall state is important for microservices running within an embedded system. ... The important thing to understand is that any embedded system will need a routing mechanism to coordinate traffic and data exchange among the various devices that make up the system.
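The size advantage of a binary format is easy to demonstrate. Below is a minimal sketch, using Python's standard `struct` module as a stand-in for a custom binary encoding; the message fields (`device_id`, `temperature_c`, `timestamp`) and the fixed layout are illustrative assumptions, not a real device protocol.

```python
import json
import struct

# A hypothetical sensor reading exchanged between two embedded microservices.
reading = {"device_id": 42, "temperature_c": 21.5, "timestamp": 1721721600}

# Text-based encoding: human-readable, but verbose for a constrained link.
json_payload = json.dumps(reading).encode("utf-8")

# Custom binary encoding with a fixed little-endian layout:
# uint16 device_id, float32 temperature, uint32 timestamp -> 10 bytes total.
binary_payload = struct.pack("<HfI", reading["device_id"],
                             reading["temperature_c"], reading["timestamp"])

# The receiver decodes with the same agreed layout.
device_id, temperature_c, timestamp = struct.unpack("<HfI", binary_payload)
```

Here the binary message is 10 bytes versus roughly 66 bytes of JSON for the same data — the kind of saving that matters on a bandwidth-limited embedded link. Protocol buffers offer the same compactness plus schema evolution, at the cost of a code-generation step.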


How to assess a general-purpose AI model’s reliability before it’s deployed

But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences. To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task. They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable. When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks. Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. 
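The intuition behind checking consistency across slightly different models can be illustrated with a toy sketch. This is not the MIT-IBM algorithm itself — comparing raw embeddings across models is complicated by the fact that each model has its own representation space — so the sketch below scores agreement by neighborhood overlap instead: for each "model" (a hypothetical embedding function), find the test point's nearest neighbors in a shared reference set, then measure how much those neighbor sets agree across models. All functions and numbers are made up for illustration.

```python
from itertools import combinations

def nearest_neighbors(embed, test_point, reference_points, k=3):
    """Indices of the k reference points closest to the test point
    in this model's representation space (squared Euclidean distance)."""
    test_vec = embed(test_point)
    dists = []
    for i, p in enumerate(reference_points):
        vec = embed(p)
        dists.append((sum((a - b) ** 2 for a, b in zip(test_vec, vec)), i))
    return {i for _, i in sorted(dists)[:k]}

def consistency_score(models, test_point, reference_points, k=3):
    """Average pairwise overlap (0..1) of the models' neighbor sets.
    High overlap = the models 'see' the test point the same way."""
    neighbor_sets = [nearest_neighbors(m, test_point, reference_points, k)
                     for m in models]
    pairs = list(combinations(neighbor_sets, 2))
    return sum(len(a & b) / k for a, b in pairs) / len(pairs)

# Toy stand-ins for slightly different foundation models.
models = [
    lambda x: [x, 2 * x],
    lambda x: [x + 0.1, 2 * x - 0.1],
    lambda x: [0.9 * x, 2.1 * x],
]
reference = [0.0, 1.0, 2.0, 3.0, 4.0]
score = consistency_score(models, test_point=1.2, reference_points=reference)
```

Because the three toy models differ only slightly, they agree on the test point's neighborhood and the score comes out at the maximum of 1.0; a test point the models disagreed on would score lower, flagging it as unreliable.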


The Role of Technology in Modern Product Engineering

Product engineering has seen a significant transformation with the integration of advanced technologies. Tools like Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), and Computer-Aided Engineering (CAE) have paved the way for more efficient and precise engineering processes. The early adoption of these technologies has enabled businesses to develop multi-million dollar operations, demonstrating the profound impact of technological advancements in the field. ... Deploying complex software solutions often involves customization and integration challenges. Addressing these challenges requires close client engagement, offering configurable options, and implementing phased customization. ... The future of product engineering is being shaped by technology integration, strategic geographic diversification, and the adoption of advanced methodologies like DevSecOps. As the tech landscape evolves with trends such as AI, Augmented Reality (AR), Virtual Reality (VR), IoT, and sustainable technology, continuous innovation and adaptation are essential.


A New Approach To Multicloud For The AI Era

The evolution from cost-focused to value-driven multicloud strategies marks a significant shift. Investing in multicloud is not just about cost efficiency; it's about creating an infrastructure that advances AI initiatives, spurs innovation and secures a competitive advantage. Unlike single-cloud or hybrid approaches, multicloud offers unparalleled adaptability and resource diversity, which are essential in the AI-driven business environment. Here are a few factors to consider. ... The challenge of multicloud is not simply to utilize a variety of cloud services but to do so in a way that each contributes its best features without compromising the overall efficiency and security of the AI infrastructure. To achieve this, businesses must first identify the unique strengths and offerings of each cloud provider. For instance, one platform might offer superior data analytics tools, another might excel in machine learning performance and a third might provide the most robust security features. The task is to integrate these disparate elements into a seamless whole. 


How Can Organisations Stay Secure In The Face Of Increasingly Powerful AI Attacks

One of the first steps any organisation should take when it comes to staying secure in the face of AI-generated attacks is to acknowledge a significant top-down disparity between the volume and strength of cyberattacks, and the ability of most organisations to handle them. Our latest report shows that just 58% of companies are addressing every security alert. Without the right defences in place, the growing power of AI as a cybersecurity threat could see that number slip even lower. ... Fortunately, there is a solution: low-code security automation. This technology gives security teams the power to automate tedious and manual tasks, allowing them to focus on establishing an advanced threat defence. ... There are other benefits too, including the ability to scale implementations to match the team’s existing experience, with less reliance on coding skills. And unlike no-code tools, which can be useful for smaller organisations that are severely resource-constrained, low-code platforms are more robust and customisable, making them easier to adapt to the needs of the business.
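The kind of tedious, manual task such automation targets — deduplicating raw alerts and routing only the serious ones to analysts — can be sketched in a few lines of plain Python. The alert fields and severity scale here are hypothetical, just to show the shape of the workflow a low-code platform would let a team assemble:

```python
def triage(alerts, severity_threshold="high"):
    """Deduplicate raw alerts, then split them into an escalation queue
    (at or above the threshold) and a log-only queue."""
    rank = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    seen, escalate, log_only = set(), [], []
    for alert in alerts:
        key = (alert["source"], alert["signature"])
        if key in seen:  # drop exact repeats of the same signal
            continue
        seen.add(key)
        if rank[alert["severity"]] >= rank[severity_threshold]:
            escalate.append(alert)
        else:
            log_only.append(alert)
    return escalate, log_only

alerts = [
    {"source": "fw-1", "signature": "port-scan", "severity": "low"},
    {"source": "fw-1", "signature": "port-scan", "severity": "low"},  # duplicate
    {"source": "edr-3", "signature": "ransomware-beacon", "severity": "critical"},
]
escalate, log_only = triage(alerts)
```

Three raw alerts collapse to one escalation and one log entry — the point being that analysts see only what genuinely needs them, which is exactly the gap the 58% figure above describes.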


Time for reality check on AI in software testing

Given that AI-augmented testing tools are derived from data used to train AI models, IT leaders will also be more responsible for the security and privacy of that data. Compliance with regulations like GDPR is essential, and robust data governance practices should be implemented to mitigate the risk of data breaches or unauthorized access. Algorithmic bias introduced by skewed or unrepresentative training data must also be addressed to mitigate bias within AI-augmented testing as much as possible. But maybe we’re getting ahead of ourselves here. Because even as AI continues to evolve and autonomous testing becomes more commonplace, we will still need human assistance and validation. The interpretation of AI-generated results and the ability to make informed decisions based on those results will remain a responsibility of testers. AI will change software testing for the better. But don’t treat every tool using AI as a straight-up upgrade; each has different merits within the software development life cycle.


Overlooked essentials: API security best practices

In my experience, there are six important indicators organizations should focus on to detect and respond to API security threats effectively: shadow APIs, APIs exposed to the internet, APIs handling sensitive data, unauthenticated APIs, APIs with authorization flaws, and APIs with improper rate limiting. Let me expand on this further.

Shadow APIs: Firstly, it’s important to identify and monitor shadow APIs. These are undocumented or unmanaged APIs that can pose significant security risks.

Internet-exposed APIs: Limit and closely track the number of APIs accessible publicly. These are more prone to external threats.

APIs handling sensitive data: APIs that process sensitive data and are also publicly accessible are among the most vulnerable. They should be prioritized for security measures.

Unauthenticated APIs: An API lacking proper authentication is an open invitation to threats. Always have a catalog of unauthenticated APIs and ensure they are not vulnerable to data leaks.

APIs with authorization flaws: Maintain an inventory of APIs with authorization vulnerabilities. These APIs are susceptible to unauthorized access and misuse. Implement a process to fix these vulnerabilities as a priority.
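The last indicator in the list, improper rate limiting, is often fixed with a token-bucket limiter in front of the API. Below is a minimal sketch of that classic technique (the rates, capacity, and per-client keying are illustrative assumptions, not a recommendation for any particular service):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: permits short bursts up to
    `capacity` requests, refilling at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: caller should return HTTP 429

# In practice you would keep one bucket per client identity
# (API key, IP address, etc.); a single bucket is shown for brevity.
bucket = TokenBucket(rate=5.0, capacity=10)
results = [bucket.allow() for _ in range(12)]
```

A burst of 12 back-to-back requests against a capacity-10 bucket lets the first 10 through and rejects the tail, which is exactly the behavior missing from APIs flagged under this indicator.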



Quote for the day:

"The successful man doesn't use others. Other people use the successful man. For above all the success is of service" -- Mark Kainee