
Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: quantum key distribution (QKD) and post-quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD, which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later they’re vulnerable, incompatible with emerging standards, or impractical at scale. This could have the opposite effect, inadvertently increasing the attack surface and creating severe operational headaches, ironically leaving the business less secure. But delaying migration for too long also poses serious risks. Malicious actors could already be harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, and intellectual property cannot afford indefinite delays.


Sovereign by Design: Data Control in a Borderless World

Digital sovereignty has become a national regulatory priority. The EU has set the pace with GDPR and GAIA-X, prioritizing data residency and local infrastructure. China's Cybersecurity Law and Personal Information Protection Law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's Federal Law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's Privacy Act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's Personal Data Protection Law enforces localization for sensitive sectors, and Indonesia's Personal Data Protection Law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes lead to standardizing dimensions by presenting the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve: the data warehouse accommodates change through slowly changing dimensions (SCD) as new business contexts emerge. ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include a star or snowflake schema around a fact; when organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how a data warehouse architecture, or one of its components, should be built through good design and implementation.
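To make the fact/dimension split concrete, here is a minimal sketch in Python with pandas; the table and column names are illustrative rather than taken from the article. A fact table of sales is joined to a product dimension and aggregated by a descriptive attribute, which is the slice-and-dice pattern described above.

```python
import pandas as pd

# Dimension table: descriptive context (illustrative columns)
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "product_name": ["Widget", "Gadget"],
    "category": ["Hardware", "Accessories"],
})

# Fact table: measurable events keyed to the dimension
fact_sales = pd.DataFrame({
    "product_key": [1, 1, 2],
    "date_key": [20240101, 20240102, 20240101],
    "units_sold": [10, 5, 7],
    "revenue": [100.0, 50.0, 140.0],
})

# "Slice and dice": join facts to a dimension and aggregate by a descriptive attribute
report = (
    fact_sales.merge(dim_product, on="product_key")
    .groupby("category", as_index=False)[["units_sold", "revenue"]]
    .sum()
)
print(report)
```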


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure their organizations, while another report from the World Economic Forum shows that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment.


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, because they force you to serialize your types into generic graph objects (read: JSON, XML, or something similar). This implies that you can just transform your classes into a generic graph object at their interface edges and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid points of scaling out horizontally by segregating functionality on to different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (take your pick!). Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution" ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the devices Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. It also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States, except for Unitronics devices, which are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - February 04, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Technology skills gap plagues industries, and upskilling is a moving target

“The deepening threat landscape and rapidly evolving high-momentum technologies like AI are forcing organizations to move with lightning speed to fill specific gaps in their job architectures, and too often they are stumbling,” said David Foote, chief analyst at consultancy Foote Partners. To keep up with the rapidly changing landscape, Gartner suggests that organizations invest in agile learning for tech teams. “In the context of today’s AI-fueled accelerated disruption, many business leaders feel learning is too slow to respond to the volume, variety and velocity of skills needs,” said Chantal Steen, a senior director in Gartner’s HR practice. “Learning and development must become more agile to respond to changes faster and deliver learning more rapidly and more cost effectively.” Studies from staffing firm ManpowerGroup, hiring platform Indeed, and Deloitte consulting show that tech hiring will focus on candidates with flexible skills to meet evolving demands. “Employers know a skilled and adaptable workforce is key to navigating transformation, and many are prioritizing hiring and retaining people with in-demand flexible skills that can flex to where demand sits,” said Jonas Prising, ManpowerGroup chair and CEO.


Mixture of Experts (MoE) Architecture: A Deep Dive & Comparison of Top Open-Source Offerings

The application of MoE to open-source LLMs offers several key advantages. Firstly, it enables the creation of more powerful and sophisticated models without incurring the prohibitive costs associated with training and deploying massive, single-model architectures. Secondly, MoE facilitates the development of more specialized and efficient LLMs, tailored to specific tasks and domains. This specialization can lead to significant improvements in performance, accuracy, and efficiency across a wide range of applications, from natural language translation and code generation to personalized education and healthcare. The open-source nature of MoE-based LLMs promotes collaboration and innovation within the AI community. By making these models accessible to researchers, developers, and businesses, MoE fosters a vibrant ecosystem of experimentation, customization, and shared learning. ... Integrating MoE architecture into open-source LLMs represents a significant step forward in the evolution of artificial intelligence. By combining the power of specialization with the benefits of open-source collaboration, MoE unlocks new possibilities for creating more efficient, powerful, and accessible AI models that can revolutionize various aspects of our lives.
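As a rough illustration of the routing idea behind MoE, the sketch below (plain NumPy, with made-up expert and gate weights, not taken from any specific open-source model) sends each input through only the top-k scoring experts and combines their outputs using the gate weights, which is where the efficiency gain over running one monolithic model comes from.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Illustrative "experts": each is just a small linear transform of the input
rng = np.random.default_rng(0)
num_experts, d_in, d_out = 4, 8, 8
experts = [rng.normal(size=(d_in, d_out)) for _ in range(num_experts)]
gate_weights = rng.normal(size=(d_in, num_experts))

def moe_forward(x, top_k=2):
    # The router scores every expert for this token, then keeps only the top-k
    scores = softmax(x @ gate_weights)
    top = np.argsort(scores)[-top_k:]
    weights = scores[top] / scores[top].sum()
    # Only the selected experts run; their outputs are blended by the gate weights
    return sum(w * (x @ experts[i]) for i, w in zip(top, weights))

token = rng.normal(size=d_in)
print(moe_forward(token).shape)  # (8,)
```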


The DeepSeek Disruption and What It Means for CIOs

The emergence of DeepSeek has also revived a long-standing debate about open-source AI versus proprietary AI. Open-source AI is not a silver bullet. CIOs need to address critical risks, as open-source AI models, if not secured properly, can be exposed to grave cyberthreats and adversarial attacks. While DeepSeek currently shows extraordinary efficiency, it requires an internal infrastructure, unlike GPT-4, which can seamlessly scale on OpenAI's cloud. Open-source AI models lack support and skills, thereby mandating users to build their own expertise, which could be demanding. "What happened with DeepSeek is actually super bullish. I look at this transition as an opportunity rather than a threat," said Steve Cohen, founder of Point72. ... Regulatory non-compliance adds another challenge, as many governments restrict or disallow sensitive enterprise data from being processed by Chinese technologies. The possibility of a backdoor can't be ruled out, which could expose enterprises to additional risks. CIOs need to conduct extensive security audits before deploying DeepSeek. Organizations can implement safeguards such as on-premises deployment to avoid data exposure. Integrating strict encryption protocols can help AI interactions remain confidential, and performing rigorous security audits ensures the model's safety before deploying it into business workflows.


Why GreenOps will succeed where FinOps is failing

The cost-control focus fails to engage architects and engineers in rethinking how systems are designed, built and operated for greater efficiency. This lack of engagement results in inertia and minimal progress. For example, the database team we worked with in an organization new to the cloud launched all the AWS RDS database servers from dev through production, incurring a $600K a month cloud bill nine months before the scheduled production launch. The overburdened team was not thinking about optimizing costs, but rather optimizing their own time and getting out of the way of the migration team as quickly as possible. ... GreenOps — formed by merging FinOps, sustainability and DevOps — addresses the limitations of FinOps while integrating sustainability as a core principle. Green computing contributes to GreenOps by emphasizing energy-efficient design, resource optimization and the use of sustainable technologies and platforms. This foundational focus ensures that every system built under GreenOps principles is not only cost-effective but also minimizes its environmental footprint, aligning technological innovation with ecological responsibility. Moreover, we’ve found that providing emissions feedback to architects and engineers is a bigger motivator than cost to inspire them to design more efficient systems and build automation to shut down underutilized resources.


Best Practices for API Rate Limits and Quotas

Unlike short-term rate limits, the goal of quotas is to enforce business terms such as monetizing your APIs and protecting your business from high-cost overruns by customers. They measure customer utilization of your API over longer durations, such as per hour, per day, or per month. Quotas are not designed to prevent a spike from overwhelming your API. Rather, quotas regulate your API’s resources by ensuring a customer stays within their agreed contract terms. ... Even a protection mechanism like rate limiting could have errors. For example, a bad network connection with Redis could cause reading rate limit counters to fail. In such scenarios, it’s important not to artificially reject all requests or lock out users even though your Redis cluster is inaccessible. Your rate-limiting implementation should fail open rather than fail closed, meaning all requests are allowed even though the rate limit implementation is faulting. This also means rate limiting is not a workaround to poor capacity planning, as you should still have sufficient capacity to handle these requests or even design your system to scale accordingly to handle a large influx of new requests. This can be done through auto-scale, timeouts, and automatic trips that enable your API to still function.
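As a minimal sketch of the fail-open behaviour described above, assuming a Redis-backed fixed-window counter via the redis-py client (the key naming and limits are illustrative): if the limiter's backing store errors out, the request is allowed rather than rejected, so a Redis outage does not lock out every caller.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

WINDOW_SECONDS = 60
LIMIT = 100  # illustrative per-minute limit

def allow_request(customer_id: str) -> bool:
    key = f"ratelimit:{customer_id}"
    try:
        count = r.incr(key)
        if count == 1:
            # First hit in this window: start the expiry clock
            r.expire(key, WINDOW_SECONDS)
        return count <= LIMIT
    except redis.exceptions.RedisError:
        # Fail open: if the limiter itself is unhealthy, let the request through.
        # Capacity planning, timeouts, and auto-scaling still have to absorb the load.
        return True
```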


Protecting Ultra-Sensitive Health Data: The Challenges

Protecting ultra-sensitive information "is an incredibly confusing and complicated and evolving part of the law," said regulatory attorney Kirk Nahra of the law firm WilmerHale. "HIPAA generally does not distinguish between categories of health information," he said. "There are exceptions - including the recent Dobbs rule - but these are not fundamental in their application," he said. Privacy protections related to abortion procedures are perhaps the most hotly debated type of patient information. For instance, last June - in response to the Supreme Court's June 2022 Dobbs ruling, which overturned the national right to abortion - the Biden administration's U.S. Department of Health and Human Services modified the HIPAA Privacy Rule to add additional safeguards for the access, use and disclosure of reproductive health information. The rule is aimed at protecting women from the use or disclosure of their reproductive health information when it is sought to investigate or impose liability on individuals, healthcare providers or others who seek, obtain, provide or facilitate reproductive healthcare that is lawful under the circumstances in which such healthcare is provided. But that rule is being challenged in federal court by 15 state attorneys general seeking to revoke the regulations.


Evolving threat landscape, rethinking cyber defense, and AI: Opportunities and risks

Businesses are firmly in attackers’ crosshairs. Financially motivated cybercriminals conduct ransomware attacks with record-breaking ransoms being paid by companies seeking to avoid business interruption. Others, including nation-state hackers, infiltrate companies to steal intellectual property and trade secrets to gain commercial advantage over competitors. Further, we regularly see critical infrastructure being targeted by nation-state cyberattacks designed to act as sleeper cells that can be activated in times of heightened tension. Companies are on the back foot. ... As zero trust disrupts obsolete firewall and VPN-based security, legacy vendors are deploying firewalls and VPNs as virtual machines in the cloud and calling it zero trust architecture. This is akin to DVD hardware vendors deploying DVD players in a data center and calling it Netflix! It gives a false sense of security to customers. Organizations need to make sure they are really embracing zero trust architecture, which treats everyone as untrusted and ensures users connect to specific applications or services, rather than a corporate network. ... Unfortunately, the business world’s harnessing of AI for cyber defense has been slow compared to the speed of threat actors harnessing it for attacks. 


Six essential tactics data centers can follow to achieve more sustainable operations

By adjusting energy consumption based on real-time demand, data centers can significantly enhance their operational efficiency. For example, during periods of low activity, power can be conserved by reducing energy use, thus minimizing waste without compromising performance. This includes dynamic power management technologies in switch and router systems, such as shutting down unused line cards or ports and controlling fan speeds to optimize energy use based on current needs. Conversely, during peak demand, operations can be scaled up to meet increased requirements, ensuring consistent and reliable service levels. Doing so not only reduces unnecessary energy expenditure, but also contributes to sustainability efforts by lowering the environmental impact associated with energy-intensive operations. ... Heat generated from data center operations can be captured and repurposed to provide heating for nearby facilities and homes, transforming waste into a valuable resource. This approach promotes a circular energy model, where excess heat is redirected instead of discarded, reducing the environmental impact. Integrating data centers into local energy systems enhances sustainability and offers tangible benefits to surrounding areas and communities whilst addressing broader energy efficiency goals.


The Engineer’s Guide to Controlling Configuration Drift

“Preventing configuration drift is the bedrock for scalable, resilient infrastructure,” comments Mayank Bhola, CTO of LambdaTest, a cloud-based testing platform that provides instant infrastructure. “At scale, even small inconsistencies can snowball into major operational inefficiencies. We encountered these challenges [user-facing impact] as our infrastructure scaled to meet growing demands. Tackling this challenge head-on is not just about maintaining order; it’s about ensuring the very foundation of your technology is reliable. And so, by treating infrastructure as code and automating compliance, we at LambdaTest ensure every server, service, and setting aligns with our growth objectives, no matter how fast we scale.” Adopting drift detection and remediation strategies is imperative for maintaining a resilient infrastructure. ... The policies you set at the infrastructure level, such as those for SSH access, add another layer of security to your infrastructure. Ansible allows you to define policies like removing root access, changing the default SSH port, and setting user command permissions. “It’s easy to see who has access and what they can execute,” Kampa remarks. “This ensures resilient infrastructure, keeping things secure and allowing you to track who did what if something goes wrong.”
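A minimal sketch of the drift-detection idea, with illustrative setting names: declare the desired state, compare it with what a host actually reports, and surface every mismatch. Real environments would typically express this through tools such as Ansible or Terraform rather than a hand-rolled script.

```python
# Minimal drift check: desired state vs. observed state (keys are illustrative)
desired = {
    "ssh_port": 2222,
    "root_login": "disabled",
    "ntp_server": "time.internal.example",
}

observed = {
    "ssh_port": 22,          # drifted: someone reverted the port
    "root_login": "disabled",
    "ntp_server": "time.internal.example",
}

def detect_drift(desired: dict, observed: dict) -> dict:
    """Return {setting: (expected, actual)} for every value that has drifted."""
    return {
        key: (value, observed.get(key))
        for key, value in desired.items()
        if observed.get(key) != value
    }

drift = detect_drift(desired, observed)
if drift:
    print("Configuration drift detected:", drift)  # remediation or alerting would hook in here
```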


Strategies for mitigating bias in AI models

The need to address bias in AI models stems from the fundamental principle of fairness. AI systems should treat all individuals equitably, regardless of their background. However, if the training data reflects existing societal biases, the model will likely reproduce and even exaggerate those biases in its outputs. For instance, if a facial recognition system is primarily trained on images of one demographic, it may exhibit lower accuracy rates for other groups, potentially leading to discriminatory outcomes. Similarly, a natural language processing model trained on predominantly Western text may struggle to understand or accurately represent nuances in other languages and cultures. ... Incorporating contextual data is essential for AI systems to provide relevant and culturally appropriate responses. Beyond basic language representation, models should be trained on datasets that capture the history, geography, and social issues of the populations they serve. For instance, an AI system designed for India should include data on local traditions, historical events, legal frameworks, and social challenges specific to the region. This ensures that AI-generated responses are not only accurate but also culturally sensitive and context-aware. Additionally, incorporating diverse media formats such as text, images, and audio from multiple sources enhances the model’s ability to recognise and adapt to varying communication styles.

Daily Tech Digest - August 30, 2024

Balancing AI Innovation and Tech Debt in the Cloud

While AI presents incredible opportunities for innovation, it also sheds light on the need to reevaluate existing governance awareness and frameworks to include AI-driven development. Historically, DORA metrics were introduced to quantify elite engineering organizations based on two critical categories: speed and safety. Speed alone does not indicate elite engineering if the safety aspects are disregarded altogether. AI development cannot be left behind when considering the safety of AI-driven applications. Running AI applications according to data privacy, governance, FinOps and policy standards is critical now more than ever, before this tech debt spirals out of control and data privacy is infringed upon by machines that are no longer in human control. Data is not the only thing at stake, of course. Costs and breakage should also be a consideration. If the CrowdStrike outage from last month has taught us anything, it’s that even seemingly simple code changes can bring down entire mission-critical systems at a global scale when not properly released and governed. This involves enforcing rigorous data policies, cost-conscious policies, compliance checks and comprehensive tagging of AI-related resources.


AI and Evolving Legislation in the US and Abroad

The best way to prepare for regulatory changes is to get your house in order. Most crucial is having an AI and data governance structure. This should be part of the overall product development lifecycle so that you’re thinking about how data and AI is being used from the very beginning. Some best practices for governance include: Forming a cross-functional committee to evaluate the strategic use of data and AI products; Ensuring you have experts from different domains working together to design algorithms that produce output that is relevant, useful and compliant; Implementing a risk assessment program to determine what risks are at issue for each use case; Executing an internal and external communication plan to inform about how AI is being used in your company and the safeguards you have in place. AI has become a significant, competitive factor in product development. As businesses develop their AI program, they should continue to abide by responsible and ethical guidelines to help them stay compliant with current and emerging legislation. Companies that follow best practices for responsible use of AI will be well-positioned to navigate current rules and adapt as regulations evolve.


The paradox of chaos engineering

Although chaos engineering offers potential insights into system robustness, enterprises must scrutinize its demands on resources, the risks it introduces, and its alignment with broader strategic goals. Understanding these factors is crucial to deciding whether chaos engineering should be a focal area or a supportive tool within an enterprise’s technological strategy. Each enterprise must determine how closely to follow this technological evolution and how long to wait for their technology provider to offer solutions. ... Chaos engineering offers a proactive defense mechanism against system vulnerabilities, but enterprises must weigh its risks against their strategic goals. Investing heavily in chaos engineering might be justified for some, particularly in sectors where uptime and reliability are crucial. However, others might be better served by focusing on improvements in cybersecurity standards, infrastructure updates, and talent acquisition. Also, what will the cloud providers offer? Many enterprises get into public clouds because they want to shift some of the work to the providers, including reliability engineering. Sometimes, the shared responsibility model is too focused on the desire of the cloud providers rather than their tenants. You may need to step it up, cloud providers.


Generative AI vs large language models: What’s the difference?

While generative AI has become popular for content generation more broadly, LLMs are making a massive impact on the development of chatbots. This allows companies to provide more useful responses to real-time customer queries. However, there are differences in the approach. A basic generative AI chatbot, for example, would answer a question with a set answer taken from a stock of responses upon which it has been trained. Introducing an LLM as part of the chatbot set-up means its responses become much more detailed and reactive, as if the reply has come from a human advisor instead of from a computer. This is quickly becoming a popular option, with firms such as JP Morgan embracing LLM chatbots to improve internal productivity. Other useful implementations of LLMs are to generate or debug code in software development, or to carry out brainstorms or research tasks by tapping into various online sources for suggestions. This ability is made possible by a related AI technology called retrieval augmented generation (RAG), in which LLMs draw on vectorized information outside of their training data to ground responses in additional context and improve their accuracy.
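A minimal sketch of the RAG pattern described above: rank documents by similarity to the query and prepend the best matches to the prompt before it reaches the LLM. The embed() function here is a hypothetical placeholder; a real system would call an embedding model and a vector store instead.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

documents = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: standard delivery takes 3-5 business days.",
]
doc_vectors = [embed(d) for d in documents]

def build_prompt(question: str, top_k: int = 1) -> str:
    # Retrieve the documents most similar to the question, then ground the prompt in them
    q_vec = embed(question)
    ranked = sorted(range(len(documents)),
                    key=lambda i: cosine(q_vec, doc_vectors[i]), reverse=True)
    context = "\n".join(documents[i] for i in ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do I have to return a product?"))  # this prompt would be sent to the LLM
```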


Agentic AI: Decisive, operational AI arrives in business

Agentic AI, at its core, is designed to automate a specific function within an organization’s myriad business processes, without human intervention. AI agents can, for example, handle customer service issues, such as offering a refund or replacement, autonomously, and they can identify potential threats on an organization’s network and proactively take preventive measures. ... Cognitive AI agents can also serve as assistants in the healthcare setting by engaging with a patient daily to support mental healthcare treatment, and as student recruiters at universities, says Michelle Zhou, founder of Juji AI agents and an inventor of IBM Watson Personality Insights. The AI recruiter could ask prospective students about their purpose of visit, address their top concerns, infer the students’ academic interests and strengths, and advise them on suitable programs that match their interests, she says. ... The key to getting the most value out of AI agents is getting out of the way, says Jacob Kalvo, co-founder and CEO of Live Proxies, a provider of advanced proxy solutions. “Where agentic AI truly unleashes its power is in the ability to act independently,” he says. 


Protecting E-Commerce Businesses Against Disruptive AI-driven Bot Threats

Bot attacks have long been a thorn in the side of e-commerce platforms. With the growing number of shoppers regularly interacting and sharing their data on retail websites combined with high transaction volumes and a growing attack surface, these online businesses have been a lucrative target for cybercriminal activity. From inventory hoarding, account takeover, and credential stuffing to price scraping and fake account creation, these automated threats have often caused significant damage to e-commerce operations. By using a variety of sophisticated evasion techniques in distributed bot attacks such as rapidly rotating IPs and identities and manipulating HTTP headers to appear as legitimate requests, attackers have been able to evade detection by traditional bot detection tools.  ... With the evolution of Generative AI models and its increasing adoption by bot operators, bot attacks are expected to become even more sophisticated and aggressive in nature. In the future, Gen AI-based bots could be able to independently learn, communicate with other bots, and adapt in real-time to an application’s defensive mechanisms. 


How copilot is revolutionising business process automation and efficiency

Copilot is essential for optimising operations in addition to increasing productivity. Companies frequently struggle with inefficiencies brought on by human error and manual processes. Copilot ensures seamless operations and lowers the possibility of errors by automating these activities. Customer service automation is a case in point. According to a survey, 72% of consumers believe that agents should automatically be aware of their personal information and service history. Customer relationship management (CRM) systems can incorporate Copilot to give agents real-time information and recommendations, guaranteeing a customised and effective service experience. The efficiency of customer support operations is further enhanced by intelligent routing of questions and automated responses. ... For example, Copilot can forecast performance, assess market trends, and provide investment recommendations in the financial industry. Deloitte claims that artificial intelligence (AI) can cut operating costs in the finance sector by as much as 20%. Copilot’s automated data analysis and accurate recommendation engine help financial organisations remain ahead of the curve and confidently make strategic decisions.


Is your data center earthquake-proof?

Leuce explains that when Colt DCS designs the layout of a data center, it ensures the most critical parts, such as the data halls, electrical rooms, and other ancillary rooms required for business continuity, are placed on the isolation base. Other elements, such as generators, which are often designed to withstand an earthquake, can then be placed directly on the ground. ... A final technique employed by Colt DCS is the use of dampers – hydraulic devices that dissipate the kinetic energy of seismic events and cushion the impact between structures. Having previously deployed lead dampers at its first data center in Inzai, Japan, Colt has gone a step further at its most recently built facility in Keihanna, Japan, where it is using a combination of an oil damper made out of naturally laminated rubber plus a friction pendulum system, a type of base isolation that allows you to damp both vertically and horizontally. “The reason why we mix the friction pendulum with the oil damper is because with the oil damper, you can actually control the frequency in the harmonics pulsation of the building, depending on the viscosity of the oil, while the friction pendulum does the job of dampening the energy in both directions, so you bring both technologies together,” Leuce explains.


Digital IDV standards, updated regulation needed to fight sophisticated cybercrime

In the face of rising fraud and technological advancements, there is a growing consensus on the need for innovative approaches to financial security. As argued in a recent Forbes article, the upcoming election season presents an opportunity to rethink the ecosystem that supports financial innovation. In the article, Penny Lee, president and CEO of the Financial Technology Association (FTA), advocates for policies that foster technological advancements while ensuring robust regulatory frameworks to protect consumers from emerging threats. ... Amidst these challenges, the payments industry is experiencing a surge in innovation aimed at combating fraud and enhancing security. Real-time payments and secure digital identity systems are at the forefront of these efforts. The U.S. Payments Forum Summer Market Snapshot highlights a growing interest in real-time payments systems, which enable instant transfer of funds and provide businesses and consumers with immediate access to their money. These systems are designed to improve cash flow management and reduce the risk of fraud through enhanced authentication measures.


Transformer AI Is The Healthcare System's Way Forward

Transformer-based LLMs are adapting quickly to the amount of medical information the NHS deals with per patient and on a daily basis. The size of the ‘context windows’, or input, is expanding to accommodate larger patient files, critical for quick analysis of medical notes and more efficient decision making by clinical teams. Beyond speed, these models serve well for quality of output, which can lead to more optimal patient care. An ‘attention mechanism’ learns how different inputs relate to each other. In a medical context, this can include the interactions of different drugs in a patient’s record. It can find relationships between medicines and certain allergies, predicting the outcome of this interaction on the patient’s health. As more patient records become electronic, the larger training sets will allow LLMs to become more accurate. These AI models can do what takes humans hours of manual effort – sifting through patient notes, interpreting medical records and family history and understanding relationships between previous conditions and treatments. The benefit of having this system in place is that it creates a full, contextual picture of a patient that helps clinical teams make quick decisions about treatment and counsel.
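For reference, the attention mechanism described here is, in the standard transformer formulation, scaled dot-product attention, where Q, K and V are learned query, key and value projections of the input tokens and d_k is the key dimension; this is the general formula, not one quoted in the article:

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
$$

The softmax term is what learns how strongly each input (for example, a drug in a patient record) should attend to every other input when producing the output representation.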



Quote for the day:

"Are you desperate or determined? With desperation comes frustration; With determination comes purpose achievement, and peace." -- James A. Murphy

Daily Tech Digest - August 28, 2024

Improving healthcare fraud prevention and patient trust with digital ID

Digital trust involves the use of secure and transparent technologies to protect patient data while enhancing communication and engagement. For example, digital consent forms and secure messaging platforms allow patients to communicate with their healthcare providers conveniently while ensuring that their data remains protected. Furthermore, integrating digital trust technology into healthcare systems can streamline administrative processes, reduce paperwork, and minimize the chances of errors, according to a blog post by Five Faces. This not only enhances operational efficiency but also improves the overall patient experience by reducing wait times and simplifying access to medical services. ... These smart cards, embedded with secure microchips, store vital patient information and health insurance details, enabling healthcare providers to access accurate and up-to-date information during consultations. The use of chip-based ID cards reduces the risk of identity theft and fraud, as these cards are difficult to duplicate and require secure authentication methods. This technology ensures that only authorized individuals can access patient information, thereby protecting sensitive data from unauthorized access.


A CEO's Take on AI in the Workforce

Those ignoring the AI transformation and not uptraining their skilled staff are not putting themselves in a position to make use of untapped data that can provide insights into other areas of opportunity for their business. Making minimal-to-no investments in emerging technology merely delays the inevitable and puts companies at a disadvantage at the hands of their competitors. Alternatively, being too aggressive with AI can lead to security vulnerabilities or critical talent loss. While AI integration is critical to accelerating business outputs, doing so without moderators, data safeguards, and regulators to keep organizations in line with data governance and compliance is actually exposing companies to security issues. ... AI should not replace people, but rather presents an opportunity to better utilize them. AI can help solve time-management and efficiency issues across organizations, allowing skilled people to focus on creative and strategic roles or projects that drive better business value. The role of AI should focus on automating time-consuming, repetitive, administrative tasks, thereby leaving individuals to be more calculated and intentional with their time.


The promise of open banking: How data sharing is changing financial services

The benefits of open banking are multifaceted. Customers gain greater control over their financial data, allowing them to securely share it with authorized providers. This empowers them to explore a wider range of customized financial products and services, ultimately promoting financial stability and well-being. Additionally, open banking fosters innovation within the industry, as Fintech companies leverage customer-consented data to develop cutting-edge solutions. The Account Aggregator (AA) framework, regulated by the Reserve Bank of India (RBI), is a cornerstone of open banking in India. AAs act as trusted intermediaries, allowing users to consolidate their financial data from various sources, including banks, mutual funds, and insurance companies, into a single platform. ... APIs empower platforms to aggregate FD offerings from a multitude of banks across India. This provides investors with a comprehensive view of available options, allowing them to compare interest rates, tenures, minimum deposit requirements, and other features within a single platform. This transparency empowers informed decision-making, enabling investors to select the FD that best aligns with their risk appetite and financial goals.


What are the realistic prospects for grid-independent AI data centers in the UK?

Already colo companies looking to develop in the UK are evaluating on-site gas engine power generation and CHP (combined heat and power). To date, UK CHP projects have been hampered by a lack of grid capacity. Microgrid developments are viewed as a solution to this. CHP and microgrids should also make data center developments more appealing for local government planning departments. ... Data center developments have hit front-line politics, with Rachel Reeves, the new UK Labour government's Chancellor of the Exchequer (Finance Minister), citing data center infrastructure and reform of planning law as critical to growing the country's economy. Some projects that were denied planning permission already look likely to be reconsidered, with reports that Deputy Prime Minister Angela Rayner has "recovered two planning appeals for data centers in Buckinghamshire and Hertfordshire". It seems clear that meeting data center capacity demand for AI, cloud and other digital services will require on-site power generation in some form or other.


Why Every IT Leader Needs a Team of Trusted Advisors

When seeking advisors, look for individuals with the time and willingness to join your kitchen cabinet, Kelley says. "Be mindful of their schedules and obligations, since they are doing you a favor," he notes. Additionally, if you're offering any perks, such as paid meals, travel reimbursement, or direct monetary payments, let them know upfront. Such bonuses are relatively rare, however. "More than likely, you’re talking about individual or small group phone calls or meetings." Above all, be honest and open with your team members. "Let them know what kind of help you need and the time frame you are working under," Kelley says. "If you've heard different or contradictory advice from other sources, bring it up and get their reaction," he recommends. Keep in mind that an advisory team is a two-way relationship. Kelley recommends personalizing each connection with an occasional handwritten note, book, lunch, or ticket to a concert or sporting event. On the other hand, if you decide to ignore their input or advice, you need to explain why, he suggests. Otherwise, they might conclude that being a team participant is a waste of time. Also be sure to help your team members whenever they need advice or support. 


Why CI and CD Need to Go Their Separate Ways

Continuous promotion is a concept designed to bridge the gap between CI and CD, addressing the limitations of traditional CI/CD pipelines when used with modern technologies like Kubernetes and GitOps. The idea is to insert an intermediary step that focuses on promotion of artifacts based on predefined rules and conditions. This approach allows more granular control over the deployment process, ensuring that artifacts are promoted only when they meet specific criteria, such as passing certain tests or receiving necessary approvals. By doing so, continuous promotion decouples the CI and CD processes, allowing each to focus on its core responsibilities without overextension. ... Introducing a systematic step between CI and CD ensures that only qualified artifacts progress through the pipeline, reducing the risk of faulty deployments. This approach allows the implementation of detailed rule sets, which can include criteria such as successful test completions, manual approvals or compliance checks. As a result, continuous promotion provides greater control over the deployment process, enabling teams to automate complex decision-making processes that would otherwise require manual intervention.
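A minimal sketch of a promotion gate between CI and CD, with illustrative rule names (required approvers, compliance flag); in practice these rules usually live in GitOps or pipeline tooling rather than an ad hoc script, but the shape of the decision is the same: an artifact advances only when every criterion holds.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    version: str
    tests_passed: bool = False
    approvals: set = field(default_factory=set)
    compliance_ok: bool = False

REQUIRED_APPROVERS = {"qa-lead", "release-manager"}  # illustrative rule

def can_promote(artifact: Artifact) -> bool:
    """Promotion gate between CI and CD: every rule must hold before rollout."""
    return (
        artifact.tests_passed
        and REQUIRED_APPROVERS.issubset(artifact.approvals)
        and artifact.compliance_ok
    )

build = Artifact("payments-api", "1.4.2", tests_passed=True,
                 approvals={"qa-lead", "release-manager"}, compliance_ok=True)
if can_promote(build):
    print(f"Promoting {build.name}:{build.version} to the deployment environment")
else:
    print("Artifact held back: promotion criteria not met")
```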


CIOs listen up: either plan to manage fast-changing certificates, or fade away

Even when organizations finally decide to set policies and standardize security for new deployments, mitigating the existing deployments is a huge effort, and in the modern stack, there’s no dedicated operations team, he says. That makes it more important for CIOs to take ownership of the problem, Cairns points out. “Especially in larger, more complex and global organizations, the magnitude of trying to push these things through the organization is often underestimated,” he says. “Some of that is having a good handle on the culture and how to address these things in terms of messaging, communications, enforcement of the right policies and practices, and making sure you’ve got the proper stakeholder buy-in at the various points in this process — a lot of governance aspects.” ... Many large organizations will soon need to revoke and reprovision TLS certificates at scale. One in five Fortune 1000 companies use Entrust as their certificate authority, and from November 1, 2024, Chrome will follow Firefox in no longer trusting TLS certificates from Entrust because of a pattern of compliance failures, which the CA argues were, ironically, sometimes caused by enterprise customers asking for more time to deal with revocation. 


Effortless Concurrency: Leveraging the Actor Model in Financial Transaction Systems

In a financial transaction system, the data flow for handling inbound payments involves multiple steps and checks to ensure compliance, security, and accuracy. However, potential failure points exist throughout this process, particularly when external systems impose restrictions or when the system must dynamically decide on the course of action based on real-time data. ... Implementing distributed locks is inherently more complex, often requiring external systems like ZooKeeper, Consul, Hazelcast, or Redis to manage the lock state across multiple nodes. These systems need to be highly available and consistent to prevent the distributed lock mechanism from becoming a single point of failure or a bottleneck. ... In this messaging based model, communication between different parts of the system occurs through messages. This approach enables asynchronous communication, decoupling components and enhancing flexibility and scalability. Messages are managed through queues and message brokers, which ensure orderly transmission and reception of messages. ... Ensuring message durability is crucial in financial transaction systems because it allows the system to replay a message if the processor fails to handle the command due to issues like external payment failures, storage failures, or network problems.
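A minimal sketch of the actor idea using Python's asyncio, with an illustrative account-balance actor: every command flows through one mailbox and is processed sequentially, so no explicit locks (local or distributed) are needed to keep the actor's state consistent.

```python
import asyncio

class AccountActor:
    """Processes one message at a time from its mailbox, so no locks are needed."""

    def __init__(self, balance: float = 0.0):
        self.balance = balance
        self.mailbox: asyncio.Queue = asyncio.Queue()

    async def run(self):
        while True:
            command, amount, reply = await self.mailbox.get()
            if command == "deposit":
                self.balance += amount
                reply.set_result(self.balance)
            elif command == "withdraw":
                if amount <= self.balance:
                    self.balance -= amount
                    reply.set_result(self.balance)
                else:
                    reply.set_exception(ValueError("insufficient funds"))

    async def send(self, command: str, amount: float):
        # Callers communicate only via messages; the future carries the reply back
        reply = asyncio.get_running_loop().create_future()
        await self.mailbox.put((command, amount, reply))
        return await reply

async def main():
    account = AccountActor(balance=100.0)
    worker = asyncio.create_task(account.run())
    print(await account.send("deposit", 50.0))   # 150.0
    print(await account.send("withdraw", 30.0))  # 120.0
    worker.cancel()

asyncio.run(main())
```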


Hundreds of LLM Servers Expose Corporate, Health & Other Online Data

Flowise is a low-code tool for building all kinds of LLM applications. It's backed by Y Combinator, and sports tens of thousands of stars on GitHub. Whether it be a customer support bot or a tool for generating and extracting data for downstream programming and other tasks, the programs that developers build with Flowise tend to access and manage large quantities of data. It's no wonder, then, that the majority of Flowise servers are password-protected. ... Leaky vector databases are even more dangerous than leaky LLM builders, as they can be tampered with in such a way that does not alert the users of AI tools that rely on them. For example, instead of just stealing information from an exposed vector database, a hacker can delete or corrupt its data to manipulate its results. One could also plant malware within a vector database such that when an LLM program queries it, it ends up ingesting the malware. ... To mitigate the risk of exposed AI tooling, Deutsch recommends that organizations restrict access to the AI services they rely on, monitor and log the activity associated with those services, protect sensitive data trafficked by LLM apps, and always apply software updates where possible.


Generative AI vs. Traditional AI

Traditional AI, often referred to as “symbolic AI” or “rule-based AI,” emerged in the mid-20th century. It relies on predefined rules and logical reasoning to solve specific problems. These systems operate within a rigid framework of human-defined guidelines and are adept at tasks like data classification, anomaly detection, and decision-making processes based on historical data. In sharp contrast, generative AI is a more recent development that leverages advanced ML techniques to create new content. This form of AI does not follow predefined rules but learns patterns from vast datasets to generate novel outputs such as text, images, music, and even code. ... Traditional AI relies heavily on rule-based systems and predefined models to perform specific tasks. These systems operate within narrowly defined parameters, focusing on pattern recognition, classification, and regression through supervised learning techniques. Data fed into these models is typically structured and labeled, allowing for precise predictions or decisions based on historical patterns. In contrast, generative AI uses neural networks and advanced ML models to produce human-like content. This approach leverages unsupervised or semi-supervised learning techniques to understand underlying data distributions.



Quote for the day:

"Opportunities don't happen. You create them." -- Chris Grosser

Daily Tech Digest - August 12, 2024

In three or four years, ‘we won’t even talk about AI’

In general, there’s a very positive view of AI in tech. In a lot of other industries, there’s some uncertainty, some trepidation, some curiosity. But part of our pulse survey said about three out of four tech workers are using AI on a daily basis. So, the adoption in this portfolio of companies is higher than most, and I’d also say most employers and workers have a very good idea that AI is going to improve their business and their work. ... “I view AI skills as adjacent, additive skills for most people — aside from really hardcore data scientists and AI engineers. This is how most people will work in the new world. Generally, it depends. Some organizations have built whole, distinct AI organizations. Others have built embedded AI domains in all of their job functions. It really depends. There’s a lot of discussion around whether companies should have a chief AI officer. I’m not sure that’s necessary. I think a lot of those functions are already in place. You do need someone in your organization who has a holistic view of the positive sides of this and the risks associated with this.”


The AI Balancing Act: Innovating While Safeguarding Consumer Privacy

There are two sides to every coin. While AI can further compliance efforts, it can also create new privacy and security challenges. This is particularly true today, amid an ongoing global effort to strengthen data privacy laws. 71% of countries have data privacy legislation, and in recent years, this has evolved to encapsulate AI. In the EU, for instance, approval has been secured from the European Parliament around a specific AI regulatory framework. This framework imposes specific obligations on providers of high-risk AI systems and could ban certain AI-powered applications. The fact is, AI-powered technology is immensely powerful. But, it comes with complex challenges to data privacy compliance. A primary concern here relates to purpose limitation, specifically the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. As AI systems evolve, they may find new ways to utilise data, potentially extending beyond the scope of original disclosure and consent agreement. As such, maintaining transparency in AI operations to ensure accurate and appropriate data use disclosures is critical.


Is biometric authentication still effective?

With the rapid advancement and accessibility of technologies, the efficacy and security of biometric authentication methods are under threat. Fraudsters are using spoofing techniques to replicate or falsify biometric data, such as creating synthetic fingerprints or 3D facial models, to fool sensors, mimic legitimate biometric traits and gain unauthorized access to secured services. ... Unlike traditional biometric authentication, which relies on static physical attributes, behavioral biometrics verify user identity based on unique interaction patterns, such as typing rhythm, mouse movements and touchscreen interactions. This shift is essential because behavioral biometrics offer a more dynamic and adaptive layer of security, making it significantly harder for fraudsters to replicate or mask. ... With data scattered across different systems, it’s challenging to correlate information, connect the dots and identify overarching patterns of bad behavior. A decentralized approach causes businesses to overlook crucial fraud indicators and struggle to respond effectively to emerging threats due to the lack of visibility and coordination among disparate fraud prevention tools.


Practical strategies for mitigating API security risks

Identity and access management is crucial for a complete API security strategy. IAM facilitates efficient user management from creation to deactivation and ensures that only authorized individuals access APIs. IAM enables granular access control, granting permissions based on specific attributes and resources rather than just predefined roles. Integration with security information and event management (SIEM) systems enhances security by providing centralized visibility and enabling better threat detection and response. AI and machine learning are revolutionizing API security by providing sophisticated tools that enhance design, testing, threat detection, and overall governance. These technologies improve the robustness and resilience of APIs, enabling organizations to stay ahead of emerging threats and regulatory changes. As AI evolves, its role in API security will become increasingly vital, offering innovative solutions to the complex challenges of safeguarding digital assets. AI in API security goes beyond the limitations of human or rule-based interventions, enabling advanced pattern recognition and automating security audits and governance for greater defense against evolving threats.
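A minimal sketch of the attribute-based access control idea, with hypothetical Principal and Resource attributes, might look like the following; the denied-request event at the end stands in for the kind of record a SIEM integration would ingest.

```python
from dataclasses import dataclass

@dataclass
class Principal:
    user_id: str
    department: str
    clearance: int  # e.g. 1 = basic, 3 = elevated

@dataclass
class Resource:
    owner_department: str
    required_clearance: int

def is_authorized(principal: Principal, resource: Resource, action: str) -> bool:
    """Attribute-based check: decide from attributes of the caller and the
    resource rather than from a fixed list of predefined roles."""
    if action == "read":
        return principal.clearance >= resource.required_clearance
    if action == "write":
        return (principal.department == resource.owner_department
                and principal.clearance >= resource.required_clearance)
    return False  # deny by default

# Example: an API handler denies the call and emits an event a SIEM can ingest.
caller = Principal("u-17", "claims", clearance=1)
record = Resource("underwriting", required_clearance=2)
if not is_authorized(caller, record, "write"):
    print({"event": "api.authz.denied", "user": caller.user_id, "action": "write"})
```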


The evolution of the CTO – from tech keeper to strategic leader

CTOs have experienced a huge shift in how they are positioned in the workplace. They are no longer part of a small-to-medium-sized team that operates separately from the rest of the business; they are the key to tangible business growth and perhaps one of the most crucial parts of a leadership team. The main duty of CTOs is to maintain – and where possible, to modernise – tech, and to decide when something has kicked the bucket and no longer serves a purpose. These things require people power, specialist skills and money. Needless to say, investment in the role is vital. Tech leaders often feel burnt out, or worried that they don’t have the resources and support needed to do their job well. ... The saying goes, “You can never set foot in the same river twice,” and the same is true for leaders in tech – everything evolves from the moment you start working on a project. There is much to appreciate about technology that remains stable yet adaptable when changes are necessary during development. Today, innovative CTOs are on the lookout for software solutions that come with the flexibility to make an important U-turn if ever needed.


How AIOps Is Transforming IT Operations Management

IT operations management has become increasingly challenging as networks have become larger and more complex, with the introduction of remote workers and the distribution of applications and workloads across networks. Traditional operations management tools and practices struggle to keep up with the ever-growing volumes of data from multiple sources within complex and varied network environments. AIOps was designed to bring the speed, accuracy and predictive capabilities of AI technology to IT operations. AIOps provides contextually enriched, deep end-to-end, real-time insights that can be proactively acted upon, according to Forrester. AIOps solutions use real-time telemetry, developing patterns and historical operational data to perform real-time assessments of what is happening, whether it has happened before or not, what paths it might take, and what negative effects it might have on business operations. ... A "digitally mature" organization has a much better ROI on the AI investment. But because this is a "rolling target" and not static, an organization's IT infrastructure "must be able to adapt and change," Ramamoorthy said.
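The sketch below shows the spirit of this in miniature, assuming a single latency metric and a simple rolling z-score; real AIOps platforms correlate many telemetry streams with far more sophisticated models.

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flags telemetry samples that deviate sharply from recent history
    (a deliberately simple stand-in for the models AIOps platforms use)."""
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        is_anomaly = False
        if len(self.history) >= 10:  # wait for enough history to be meaningful
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

# Example: steady ~100 ms latencies, then a spike that should be flagged.
detector = RollingAnomalyDetector(window=30, threshold=3.0)
samples = [100 + (i % 5) for i in range(30)] + [480]
flags = [detector.observe(s) for s in samples]
print("spike flagged:", flags[-1])
```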


The cyber assault on healthcare: What the Change Healthcare breach reveals

Many security leaders report that they don’t have adequate resources to implement the needed security measures because they’re often competing with pricey life-saving medical equipment for the limited funds available to spend, Kim says. Furthermore, he says their complex technology environments can make creating and applying security in depth not only more challenging but also more costly. That, in turn, makes it less likely for CISOs to get the resources they need. Security teams in healthcare also have more challenges in updating and patching systems, Riggi explains, as the sector’s need for 24/7 availability means organizations can’t easily go offline — if they can go offline at all — to perform needed work. Healthcare security leaders also have a rapidly expanding tech environment to secure, as both more partners and more patients with remote medical devices become part of the sector’s already highly interconnected environment, says Errol S. Weiss, chief security officer at Health-ISAC. Such expansion heightens the challenges, complexities and costs of implementing security controls, as well as heightening the risk that a successful attack against one point in that web would impact many others.


Solar Power Installations Worldwide Open to Cloud API Bugs

"The issue we discovered lies in the cloud APIs that connect the hardware with the user," both on Solarman's platform and on Deye Cloud, says Bogdan Botezatu, director of threat Research and reporting at Bitdefender. "These APIs have vulnerable endpoints that allow an unauthorized third party to change settings or otherwise control the inverters and data loggers via the vulnerable Solarman and Deye platforms," he says. Bitdefender, for instance, found that the Solarman platform's /oauth2-s/oauth/token API endpoint would let an attacker generate authorization tokens for any regular or business accounts on the platform. "This means that a malicious user could iterate through all accounts, take over any of them and modify inverter parameters or change how the inverter interacts with the grid," Bitdefender said in its report. The security vendor also found Solarman's API endpoints to be exposing an excessive amount of information — including personally identifiable information — about organizations and individuals on the platform. 


6 hard truths of generative AI in the enterprise

“Not a week goes by without another new tool that is mind-blowing in its abilities and potential future impact,” agrees David Higginson, chief innovation officer and executive vice president of Phoenix Children’s Hospital. But right now genAI “can really only be executed by a small number of technology giants rather than being tinkered with at a local skunkworks level within a healthcare organization,” he says. “Therefore, it feels as if we are in a bit of a paused state, waiting for established vendors to deliver mature solutions that can provide the tangible value we all anticipated.” ... The fundamental barriers to adopting genAI are the scarcity and cost of the hardware, power, and data needed to train models, Higginson says. “With such scarcity comes the need to prioritize which solutions have the broadest appeal to the population and can generate the most long-term revenue,” he says. ... While research and development continue to move the needle on what genAI can do, “we know that data is a critical aspect to enabling AI solutions and we also recognize that many organizations are uncovering the work it will take to build the right data foundations to support scaled AI deployments,” says Deloitte’s Rowan.


Investing in Capacity to Adapt to Surprises in Software-Reliant Businesses

A well-known and contrarian adage in the Resilience Engineering community is that Murphy's Law - "anything that can go wrong, will" - is wrong. What can go wrong almost never does, but we don't tend to notice that. People engaged in modern work (not just software engineers) are continually adapting what they’re doing, according to the context they find themselves in. They’re able to avoid problems in most everything they do, almost all of the time. When things do go "sideways" and an issue crops up they need to handle or rectify, they are able to adapt to these situations due to the expertise they have. Research in decision-making described in the article Seeing the invisible: Perceptual-cognitive aspects of expertise by Klein, G. A., & Hoffman, R. R. (2020) reveals that while demonstrations of expertise play out in time-pressured and high-consequence events (like incident response), expertise comes from experience with facing varying situations involved with "ordinary" everyday work. It is "hidden" because the speed and ease with which experts do ordinary work contrasts with how sophisticated the work is. 



Quote for the day:

"True leadership must be for the benefit of the followers, not the enrichment of the leaders." -- Robert Townsend

Daily Tech Digest - April 04, 2024

Transforming CI/CD Pipelines for AI-Assisted Coding: Strategies and Best Practices

Most source code management tools, including Git, support tagging and annotation features that let developers label specific commits or snippets of code. Teams that adopt AI-assisted coding should use these labels to identify code that was generated wholly or partially by AI. This is an important part of a CI/CD strategy because AI-generated code is, on the whole, less reliable than code written by a skilled human developer. For that reason, it may sometimes be necessary to run extra tests on AI-generated code — or even remove it from a codebase in the event that it triggers unexpected bugs. ... Along similar lines, some teams may find it valuable to deploy extra tests for AI-generated code during the testing phase of their CI/CD pipelines, both to ensure software quality and to catch any vulnerable code or dependencies that AI introduces into a codebase. Running those tests is likely to result in a more complex testing process because there will be two sets of tests to manage: those that apply only to AI-generated code, and those that apply to all code. Thus, the testing stage of CI/CD is likely to become more complicated for teams that use AI tools.
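As a sketch of how that might look in practice, the script below assumes a team convention of marking AI-assisted files with an "# ai-generated" comment near the top and runs a stricter test suite only when such files are present; the marker, paths, and suite name are illustrative choices, not a standard.

```python
# Minimal CI helper sketch. The "# ai-generated" marker, the src/ layout, and
# the tests/extra_ai_checks suite are hypothetical conventions for illustration.
import pathlib
import subprocess
import sys

MARKER = "# ai-generated"

def find_ai_generated_files(root: str = "src") -> list[pathlib.Path]:
    """Return source files whose first lines carry the AI-generated marker."""
    flagged = []
    for path in pathlib.Path(root).rglob("*.py"):
        head = path.read_text(encoding="utf-8", errors="ignore").splitlines()[:5]
        if any(MARKER in line for line in head):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    ai_files = find_ai_generated_files()
    if ai_files:
        print(f"Running extra checks on {len(ai_files)} AI-generated file(s)")
        # e.g. a stricter pytest suite reserved for AI-assisted changes
        result = subprocess.run(["pytest", "tests/extra_ai_checks", "-q"])
        sys.exit(result.returncode)
    print("No AI-generated files detected; skipping extra checks")
```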


Revolutionising Regulatory Compliance: AI & ML Powering The Future Of Financial Governance

The use of technology is quickly transforming how businesses handle compliance challenges. AI helps by automating tasks like monitoring and reporting. It quickly finds new regulatory requirements in a sea of information and helps ensure the organisation adheres to them. Machine learning, a subset of AI, is good at spotting patterns and anomalies, which is essential for compliance. By looking at historical data, it can predict possible risks, so companies can deal with them early. Compliance officers can use AI tools to automate routine tasks, tackle harder problems, and be more transparent with regulators. AI’s smart systems make compliance work smoother and more accurate. Looking forward, AI’s contribution to compliance seems promising. Predictive compliance management, powered by AI, will move from reacting to problems to spotting risks early, which could save companies from legal trouble. Real-time monitoring and personalised solutions for each company will become common, making compliance easier and more effective. Also, AI will work with other emerging technologies like blockchain and IoT to improve compliance.
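As a toy example of the pattern-spotting idea, the sketch below trains an isolation forest on a handful of historical activity records and flags a new record that looks unusual; the library choice, features, and thresholds are assumptions for illustration rather than a prescribed compliance stack.

```python
# Assumes scikit-learn is available; features and values are made up for the demo.
from sklearn.ensemble import IsolationForest

# Each row: [transaction_amount, transactions_that_day, hours_since_last_review]
historical = [
    [120.0, 3, 24], [95.0, 2, 30], [150.0, 4, 20], [110.0, 3, 26],
    [130.0, 2, 22], [105.0, 3, 28], [140.0, 4, 25], [100.0, 2, 27],
]
model = IsolationForest(contamination=0.1, random_state=0).fit(historical)

new_activity = [[118.0, 3, 25], [9800.0, 40, 1]]  # second row looks unusual
labels = model.predict(new_activity)  # 1 = normal, -1 = flagged for review
for row, label in zip(new_activity, labels):
    if label == -1:
        print("Escalate to compliance officer:", row)
```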


Codium announces Codiumate, a new AI agent that seeks to be Devin for enterprise software development

Codium hopes that Codiumate will aid developers in their workflow, speeding up all the manual typing they would otherwise have to do, doing the “heavy lifting” and mechanical coding work, while enabling the developer to act more as a hands-on product manager overseeing the process and course correcting it as necessary, almost as though it is a junior developer or new hire to the team. The technology powering the Codiumate agent on the backend is “best of breed” OpenAI models, according to Friedman. The company is also experimenting with Anthropic’s Claude and Google’s Gemini. It also offers its own large language model (LLM) designed with its AlphaCodium process that increases the performance of other LLMs in code completion tasks. While the former is available to all users, the latter Codium LLM is only for paying enterprise users. Friedman said it is superior to OpenAI’s models on coding and that a “Fortune 10” company that could not be named due to confidentiality reasons was already using it in production.


Healthcare’s cyber resilience under siege as attacks multiply

Every healthcare organization must ensure employees are well aware of and trained about potential threats. It’s critical to ensure they understand how to navigate and evaluate everything that comes in. One requirement could be to only open emails from known senders or to only open attachments if they are secure. Many organizations’ security teams will conduct resilience tests and distribute suspicious-looking emails to see which employees will click them. Modern spam filters are relatively adept at weeding out risky emails, but anyone with an inbox knows that many get through to end users. Most employers issue computers and devices, allowing for secured settings maintained by IT departments. It’s important to keep access and logins only on those devices and not on any personal devices, which are typically much easier attack points for entering a system. Maintaining robust security settings on issued machines is especially important if the employee will be working from remote locations, including at home, where network security tends not to be as robust as it is within the enterprise.


Instilling the Hacker Mindset Organizationwide

Visibility is a foundational principle that suggests you can't secure what you don't know about. A security team's lack of visibility is a gold rush for hackers because they typically infiltrate an organization's network via hidden or sneaky entry points. If you don't have visibility, there will undoubtedly be a way in. Without visibility into all traffic within an organization's infrastructure, threat actors can continue to lurk in the network and grant themselves access to the organization's most sensitive data. With 93% of malware hiding behind encrypted traffic but only 30% of security professionals claiming to have visibility, it's no wonder that there were more ransomware attacks in the first half of 2023 than in all of 2022. Once a cybercriminal has made their way into the network, time is limited. Only with visibility can the cybercriminal be stopped from wreaking havoc and gaining access to company data. When cybersecurity professionals better understand the mysterious nature of hackers and how they work, they can better protect their own systems and valuable customer data. It's critical to stay vigilant not only when it comes to major security issues, but also with minor lapses in security best practices.


Separating the signals from the noise in tech evolution

With technology trends extensively covered across all forms of media, IT leaders often get questions or advice from well-meaning senior colleagues on what trends to adopt. However, not every trend warrants immediate attention or even playing catch-up if you’re late to the party. Wise leaders often opt to be “smart laggards” who focus on adopting and scaling the trends that really matter to their organizations. And they focus on demonstrating value quickly or stopping pilots or initiatives that are not delivering. ... In the current environment of uncertainty, marked by persistent macroeconomic challenges, global fragmentation, and growing cybersecurity challenges, tech leaders shared their perspectives on risks and resilience. More than one described reinventing the technology function and its value proposition in times of crisis, taking a “through-cycle mindset”: pushing forward in times of crisis rather than retrenching, and focusing on long-term value creation to help the company emerge stronger when conditions change. We also discussed how dashboards should balance short- to mid-term KPIs with long-term value delivery.


Navigating risks in AI governance – what have we learned so far?

In the face of a regulatory void, several entities have taken it upon themselves to establish their own standards aimed at tackling the core issues of model transparency, explainability, and fairness. Despite these efforts, the call for a more structured approach to govern AI development, mindful of the burgeoning regulatory landscape, remains loud and clear. ... However, an AI Risk and Security (AIRS) group survey reveals a notable gap between the need for governance and its actual implementation. Only 30% of enterprises have delineated roles or responsibilities for AI systems, and a scant 20% boast a centrally managed department dedicated to AI governance. This discrepancy underscores the burgeoning necessity for comprehensive governance tools to assure a future of trustworthy AI. ... The patchwork of regulatory approaches across the globe reflects the diverse challenges and opportunities presented by AI-driven decisions. The United States, for example, saw a significant development in July 2023 when the Biden administration announced that major tech firms would self-regulate their AI development, underscoring a collaborative approach to governance.


Unlocking Personal and Professional Growth: Insights From Incident Management

The skills and lessons gained from Incident Management are highly transferable to various aspects of life. For instance, adaptability is crucial not only in responding to technical issues but also in adapting to changes in personal circumstances or professional environments. Teamwork teaches collaboration, conflict resolution, and empathy, which are essential in building strong relationships both at work and in personal life. Problem-solving skills honed during incident response can be applied to tackle challenges in any domain, from planning a project to resolving conflicts. Resilience, the ability to bounce back from setbacks, is a valuable trait that helps individuals navigate through adversity with determination and a positive mindset. Continuous improvement is a mindset that encourages individuals to seek feedback, reflect on experiences, identify areas for growth, and strive for excellence. This attitude of continuous learning and development not only benefits individuals in their careers but also contributes to personal fulfillment and satisfaction.


How to build a developer-first company

Providing a great developer experience—by enabling our customers to easily add auth flows and user management to their apps—leads to a great end-user experience as the customer’s customers seamlessly and securely log in. This kind of virtuous cycle exists at many developer-focused companies. When building a successful developer-first business, it’s critical to tie together the similarities between the customer experience and the developer experience while clearly delineating the differences. ... When helping developers build their customer experience, we emphasize building onboarding and authentication flows with the best user experience in mind. That includes reducing friction, like the use of passwordless methods and progressive profiling, and creating an embedded in-app native experience to avoid needless redirections or pop-ups. Our developer experience includes an onboarding wizard that sets up their project and login flows in a few clicks. We offer a drag-and-drop visual workflow editor to easily create and customize their customer journey. We also provide robust documentation, code snippets, SDKs, tutorials, and a Slack community for troubleshooting.


How to fix the growing cybersecurity skills gap

Organizations looking to upskill their cybersecurity professionals should consider adjusting and reorganizing key workflows to give the entire security team — not just the CISO — ample time to research emerging threats and remain up to date on what the ramifications of these threats may be. By automating repetitive tasks for these team members or restructuring key processes and timelines, the entire team, from CISO to analyst, can have more time to dedicate to staying ahead of industry trends and cyber-attacks, ultimately strengthening the organization’s ability to detect and respond to threats in the long run. Giving employees time and space to be curious and explore the latest threat intelligence, commentary and insight — including topic-based tabletop exercises or red teaming — will yield significant dividends in understanding the organization's true security posture and preparedness. In today’s cybersecurity landscape, companies must strive to be learning-forward organizations. Tangible adoption of this principle must go beyond formal skills and training — every encounter your teams have with a threat or an attack is a learning opportunity.



Quote for the day:

"Though no one can go back and make a brand new start, anyone can start from now and make a brand new ending." -- Carl Band