Daily Tech Digest - September 28, 2024

IoT devices will be the catalyst for the 4th industrial revolution

The impact of IoT on product quality is not just reactive but also proactive. IoT-enabled traceability systems ensure that every component of a product can be tracked from its origin to the final assembly, ensuring full compliance with industry standards and regulations. Plus, automated systems can monitor and adjust energy usage in real-time, leading to more efficient operations that lower the overall carbon footprint of a facility. By minimizing energy waste, companies will contribute to a more sustainable environment while also realizing substantial cost savings. These savings can be reinvested into research and development, driving innovation and enhancing product quality. In return, compliance eliminates unnecessary product waste and energy consumption, which then lowers the final cost for consumers while heightening brand reputation. ... By combining the real-time data collection capabilities of IoT devices with AI-driven analytics, IoT technologies can be leveraged to enable the seamless integration of clean energy sources into industrial operations. Solar, wind, and other renewable energy sources can be efficiently managed through smart grids and automated systems that balance the energy load, ensuring that clean energy is utilized to its fullest potential. 
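As a rough illustration of the traceability idea described above (component IDs, sites, and the ledger structure are all invented for the sketch), tracking every component "from its origin to the final assembly" reduces to walking an assembly graph back through its upstream parts:

```python
from dataclasses import dataclass, field

# Hypothetical traceability ledger: each component records the site it came
# from and the upstream components it was assembled from.
@dataclass
class Component:
    part_id: str
    origin: str
    parents: list = field(default_factory=list)  # upstream part_ids

def trace_provenance(part_id: str, ledger: dict) -> set:
    """Collect every site in a component's assembly chain, origin to assembly."""
    comp = ledger[part_id]
    origins = {comp.origin}
    for parent in comp.parents:
        origins |= trace_provenance(parent, ledger)
    return origins

ledger = {
    "cell-001": Component("cell-001", "Mine A"),
    "pack-010": Component("pack-010", "Plant B", ["cell-001"]),
    "unit-100": Component("unit-100", "Assembly C", ["pack-010"]),
}

print(trace_provenance("unit-100", ledger))  # every site in the unit's chain
```

A real system would back this with tamper-evident records and IoT sensor events at each site, but the audit question is the same graph walk.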


Hackers Weaponizing PDF Files To Deliver New SnipBot Malware

They exploit the ubiquity and trusted reputation of PDFs to trick victims into opening malicious files that can contain malicious links, embedded code, or vulnerabilities that allow remote code execution. Security experts at Palo Alto Networks recently identified hackers actively weaponizing PDF files to deliver the new SnipBot malware. ... SnipBot employs a multi-stage infection process that begins with a signed executable disguised as a "PDF." It uses anti-sandbox techniques such as checking process names and registry entries. To evade detection, the malware makes use of window message-based control-flow obfuscation and encrypted strings. Besides this, it downloads additional payloads, including a DLL that injects code into Explorer.exe through COM hijacking. The core functionality of SnipBot includes a backdoor (single.dll) that creates a "SnipMutex" and enables threat actors to execute commands, upload and download files, and deploy extra modules. ... Through these evasion techniques, payload delivery methods, and post-infection capabilities, SnipBot compromises systems and exfiltrates sensitive data.
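The COM hijacking mentioned above works because Windows consults the per-user registry hive (HKCU\Software\Classes) before the machine-wide hive (HKLM), so a user-writable entry shadows the legitimate server DLL. A minimal sketch of that precedence and a simple hunt for shadowed entries (the CLSID and file paths are invented; registry hives are simulated as dicts):

```python
# Simulated registry hives: CLSID -> registered server DLL path.
HKLM = {"{clsid-example}": r"C:\Windows\System32\legit.dll"}
HKCU = {}

def resolve_com_server(clsid: str) -> str:
    # Per-user registrations take precedence over machine-wide ones --
    # this lookup order is what COM hijacking abuses.
    return HKCU.get(clsid, HKLM.get(clsid, ""))

def find_hijacked(hkcu: dict, hklm: dict) -> list:
    """Flag CLSIDs where a user-hive entry shadows a machine-hive server."""
    return [c for c in hkcu if c in hklm and hkcu[c] != hklm[c]]

# Attacker plants a DLL path in the user hive:
HKCU["{clsid-example}"] = r"C:\Users\victim\AppData\evil.dll"
print(resolve_com_server("{clsid-example}"))   # now resolves to the attacker's DLL
print(find_hijacked(HKCU, HKLM))
```

On a live system the same comparison would be done with the `winreg` module against the real hives; defenders often baseline HKCU class registrations for exactly this reason.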


Novel Exploit Chain Enables Windows UAC Bypass

Despite the potential for privilege escalation, Microsoft refused to accept the issue as a vulnerability. After Fortra reported it, the company responded by pointing to the "non-boundaries" section of the Microsoft Security Servicing Criteria for Windows, which outlines how "some Windows components and configurations are explicitly not intended to provide a robust security boundary." ... Reguly and Fortra disagree with Microsoft's perspective. "When UAC was introduced, I think we were all sold on the idea that UAC was this great new security feature, and Microsoft has a history of fixing bypasses for security features," he says. "So if they're saying that this is a trust boundary that is acceptable to traverse, really what they're saying to me is that UAC is not a security feature. It's some sort of helpful mechanism, but it's not actually security related. I think it's a really strong philosophical difference." ... Philosophical differences aside, Reguly stresses that businesses need to be aware of the risk of allowing lower-integrity admins to escalate their privileges to attain full system control.


How factories are transforming their operations with AI

One of the key end goals for the integration of AI in manufacturing is the establishment of 'lights-out factories', which means fully automating everything within the factory environment so that there is minimal to zero need for human input. So little human intervention is needed that the production process can effectively run with the lights turned off. FANUC is one example of a company that operates a lights-out factory in Japan to build its robots, having done so since 2001. The company makes 50 robots for every 24-hour shift, according to the Association for Manufacturing Technology, with the factory running unsupervised for up to 30 days without human input. Automotive manufacturing is another sector in which AI has been a major positive influence. BMW's AIQX automates certain quality control processes by using sensor technology and AI. Algorithms analyze the recorded data in real time, send employees immediate feedback, and quickly detect anomalies on the assembly line. Similarly, Rolls-Royce has melded data analytics with AI, pulling in masses of data from in-service engines in real time and feeding this into digital twins.


Beyond encryption: Hidden dangers in the wake of ransomware incidents

One of the most insidious threats in the post-ransomware landscape is the potential presence of multiple threat actors within a compromised environment. This scenario, while relatively rare, can have devastating consequences for victim organizations. The root of this problem often lies in the cyber incident ecosystem itself, particularly in the use of initial access brokers (IABs) by ransomware groups. These IABs, motivated by profit, may sell access to the same compromised network to multiple malicious actors. The result can be a perfect storm of cyber activity, with different groups vying for control of the same systems. ... Another vector for multiple-actor intrusions comes from an unexpected source: the tools used by information security professionals themselves. Malvertising campaigns have become increasingly sophisticated, targeting legitimate software distribution channels to spread compromised versions of popular security tools. Ironically, the very applications designed to protect systems can become Trojan horses for malicious actors. ... The complexity of modern cyber threats underscores the necessity of comprehensive forensic analysis following any security incident.


Prioritize Robust Engineering Over Overblown GenAI Promises

Beyond tackling data quality and scalability concerns, this necessary shift towards engineering innovation will lead to developing tools and frameworks that better support AI workflows, including handling large volumes of unstructured data (including images and videos). That, in turn, will foster a more collaborative and integrated approach between AI and data management practices. As the AI and data stacks complement each other, we can expect more cohesive and innovative solutions that address AI implementation’s technical and operational challenges. ... This maturation process promises substantial benefits beyond the realm of developers and engineers. Just as the dot-com bubble burst led to the refinement and widespread adoption of internet technologies, the current focus on data curation and engineering in AI will pave the way for transformative applications across various industries. Imagine AI-powered healthcare diagnostics that rely on meticulously curated data sets or financial systems that leverage AI for predictive analytics to manage risks more effectively. These advancements aren’t just about enhancing technical capabilities; they’re about improving outcomes for society as a whole.


IT leaders weigh up AI’s role to improve data management

“The important thing in data management is having a solid disaster recovery plan,” says Macario. “In fact, security for an NGO like ours is both a cyber and physical problem because not only are we the target of attacks, but we operate in war zones, where the services provided aren’t always reliable and, in the event of failures, hardware replacement parts are difficult to find.” Innovative encryption and geographic data backup technologies are applied, in particular immutable cloud technology that protects against ransomware. These are supported by AI for endpoint protection. User identities are also managed on the Azure Entra ID platform, which has integrated AI and warns of suspicious activity in real time. ... “We turned to the big technology players to solve the problem and the LLM algorithms led to a turning point, because they allowed us to carry out the analyses,” says Macario. “These are used by our Medical Division departments to analyze access to care and improve quality, obtain statistics, create an archive, and understand what instruments, drugs, and doctors we need in a war context. The data form a scientific basis on which to base our intervention and our ability to report the effects of war on civilian populations.”


Is it possible to save money and run on a public cloud?

In the early days of cloud computing, big providers promoted the migration of applications and data to the cloud without modification or modernization. The advice was to fix it when it got there, not before. Guess what? Workloads were never fixed or modernized. These lift-and-shift applications and data consumed about three times the resources enterprises thought they would. This led to a disenchantment with public cloud providers, even though enterprises also bore some responsibility. ... High cloud costs usually stem from the wrong cloud services or tools, flawed application load estimates, and developers who designed applications without understanding where the cloud saves money. You can see this in the purposeful use of microservices as a base architecture. ... The key to winning this war is planning. You’ll need good architecture and engineering talent to find the right path. This is probably the biggest reason we haven’t gone down this road as often as we should. Enterprises can’t find the people needed to make these calls; it’s hard to find that level of skill. Cloud providers can also be a source of help. Many have begun to use the “O word” (optimization) and understand that to keep their customers happy, they need to provide some optimization guidance. 


Beyond Compliance: Leveraging Security Audits for Enhanced Risk Management

One of the most effective ways to approach risk management in an organization is through a comprehensive security audit. Security audits objectively assess layers of an organization’s security controls, established system and operational policies, and documented procedures. Rather than simply passing or failing a defined list of compliance protocols, a security audit examines all elements of an organization’s security posture. This includes looking for potential weak points in connected networks and systems and identifying areas that function adequately but could still be improved. ... Security auditing processes can also be built into the organization’s disaster recovery initiatives. As the business tests its incident response protocols throughout the year, pairing this process with a formal audit helps the organization respond more effectively to operational disruptions. However, the benefits of a security audit aren’t just associated with minimizing operational risks. This proactive security approach can also play an impactful role in demonstrating the organization’s commitment to its customers’ data privacy.


Security, AIOps top mainframe customer challenges

“The increased prioritization of AIOps reflects surging interest in the implementation of emerging technologies on the mainframe. Those reporting the adoption of AIOps on the mainframe increased [9%] from the 2023 BMC Mainframe Survey, while 76% of respondents reported the use of generative AI [genAI] in their organizations,” McKenney wrote. “The power of AI/ML and genAI open a new world of possibility in IT management. Organizations are leveraging these technologies throughout their IT ecosystems to gain real-time insight into security postures, automate issue resolution, gain critical business insight, and onboard and train new personnel,” McKenney wrote. ... Its BMC AMI Platform will feature the BMC AMI Assistant, a chat-based, AI-powered assistant available for developers, operators, system programmers, and IT managers to use for real-time explanations, support, and automation, the company stated. “Whether help is needed to debug code, understand system processes, or make informed decisions and take actions, the BMC AMI Assistant will provide expert guidance instantly, enhancing productivity and reducing downtime. Users will leverage BMC AMI Assistant Tools to capture their local knowledge and integrate it seamlessly into the BMC AMI Assistant,” McKenney wrote in a BMC blog.



Quote for the day:

"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh

Daily Tech Digest - September 27, 2024

What happens when everybody winds up wearing ‘AI body cams’?

The first body cams were primitive. They were enormous, had narrow, 68-degree fields of view, had only 16GB of internal storage, and had batteries that lasted only four hours. Body cams now usually have high-resolution sensors, GPS, infrared for low-light conditions, and fast charging. They can be automatically activated through Bluetooth sensors, weapon release, or sirens. They use backend management systems to store, analyze, and share video footage. The state of the art — and the future of the category — is multimodal AI. ... Using such a system in multimodal AI, a user could converse with their AI agent, asking questions about what the glasses were pointed at previously. These glasses will almost certainly have a dashcam-like feature where video is constantly recorded and deleted. Users can push a button to capture and store the past 30 seconds or 30 minutes of video and audio — basically creating an AI body cam worn on the face. Smart glasses will be superior to body cams, and over time, AI body cams for police and other professionals will no doubt be replaced by AI camera glasses. This raises the question: When everybody has AI body cams — specifically glasses with AI body cam functionality — what does society then look like?
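The dashcam-like "constantly recorded and deleted" feature described above is, at its core, a rolling buffer: frames are continuously appended, the oldest are silently discarded, and pressing the capture button just snapshots whatever the window currently holds. A minimal sketch (frame rate and window length are illustrative assumptions):

```python
from collections import deque

FPS = 30
WINDOW_SECONDS = 30        # "store the past 30 seconds"

# deque with maxlen silently drops the oldest frame once the window is full.
buffer = deque(maxlen=FPS * WINDOW_SECONDS)

def record_frame(frame):
    buffer.append(frame)   # continuous recording; old frames fall off

def save_clip():
    """Called when the user presses the capture button."""
    return list(buffer)    # snapshot of only the most recent window

for t in range(10_000):    # simulate ~5.5 minutes of frames (numbered 0..9999)
    record_frame(t)

clip = save_clip()
print(len(clip), clip[0], clip[-1])   # 900 frames: the last 30 seconds only
```

Real devices do the same thing with encoded video segments rather than individual frames, but the privacy-relevant property is identical: nothing outside the window persists unless the wearer chooses to save it.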


Aligning Cloud Costs With Sustainability and Business Goals

AI is poised for democratization, similar to the cloud. Users will have the choice and ability to use multiple models for numerous use cases. Future trends indicate a rise in culturally aware and industry-specific models that will further facilitate the democratization of AI. Singapore's National Research Foundation launched AI Singapore - a national program to enhance the country's AI capabilities - to make its LLMs more culturally accurate, localized and tailored to Southeast Asia. AWS is working with Singapore public organizations to develop innovative, industry-first solutions powered by AI and gen AI, including AI Singapore's SEA-LION. Building on AWS' scalable compute infrastructure, SEA-LION is a family of LLMs that is specifically pre-trained and instruct-tuned for Southeast Asian languages and cultures. AWS released the Amazon Bedrock managed service to support gen AI deployments for large enterprises. It now provides easy access to multiple large language models and foundation models from AI21 Labs, Anthropic, Cohere, Meta and Stability AI through a single API, along with a broad set of capabilities organizations need to build gen AI applications with security, privacy and responsible AI.


Fortifying the Weakest Link: How to Safeguard Against Supply Chain Cyberattacks

Failures in systems and processes by third parties can lead to catastrophic reputational and operational damage. It is no longer sufficient to merely implement basic vendor management procedures. Organizations must also take proactive measures to safeguard against third-party control failures. ... Protect administrative access to the tools and applications used by DevOps teams. Enable secure application configuration via secrets and authenticate applications and services with high confidence. Mandate that software suppliers certify and extend security controls to cover microservices, cloud, and DevOps environments. ... Ensure that your systems and those of your suppliers are regularly updated and patched for known vulnerabilities. Prevent the use of unsupported or outdated software that could introduce new vulnerabilities. ... Configure cloud environments to reject authorization requests involving tokens that deviate from accepted norms. For on-premises systems, follow the National Security Agency’s guidelines by deploying a Federal Information Processing Standards (FIPS)-validated Hardware Security Module (HSM) to store token-signing certificate private keys. HSMs significantly reduce the risk of key theft by threat actors.
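The guidance above to "reject authorization requests involving tokens that deviate from accepted norms" can be made concrete as a claims-conformance check before a token is honored. A minimal sketch, where the policy values, issuer URL, and plain-dict claims are all invented for illustration (real deployments would verify the signature against the HSM-protected key first):

```python
import time

# Hypothetical policy: accepted issuers, a maximum token lifetime, and an
# allow-list of signing algorithms.
POLICY = {
    "issuers": {"https://idp.example.com"},
    "max_lifetime": 3600,          # seconds
    "algorithms": {"RS256"},
}

def token_conforms(claims: dict, alg: str, now: float) -> bool:
    """Reject tokens whose claims deviate from the accepted norms."""
    if alg not in POLICY["algorithms"]:
        return False
    if claims.get("iss") not in POLICY["issuers"]:
        return False
    lifetime = claims.get("exp", 0) - claims.get("iat", 0)
    if not (0 < lifetime <= POLICY["max_lifetime"]):
        return False
    # Finally, the token must actually be valid right now.
    return claims.get("iat", 0) <= now < claims.get("exp", 0)

now = time.time()
good = {"iss": "https://idp.example.com", "iat": now, "exp": now + 600}
odd  = {"iss": "https://idp.example.com", "iat": now, "exp": now + 86400}
print(token_conforms(good, "RS256", now), token_conforms(odd, "RS256", now))
```

The second token is rejected not because its signature is bad but because a 24-hour lifetime deviates from the accepted norm — the same reasoning behind detecting forged "golden" tokens with anomalous validity windows.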


Are hardware supply chain attacks “cyber attacks?”

In the case of hardware supply chain attacks, malicious actors infiltrate the supply of devices, or the physical manufacturing process of pieces of hardware and purposefully build in security flaws, faulty parts, or backdoors they know they can take advantage of in the future, such as malicious microchips on a circuit board. For Cisco’s part, the Cisco Trustworthy technologies program, including secure boot, Cisco Trust Anchor module (TAm), and runtime defenses give customers the confidence that the product is genuinely from Cisco. As I was thinking about the threat of hardware supply chain attacks, I was left wondering who, exactly, should be tasked with solving this problem. And I think I’ve decided the onus falls on several different sectors. It shouldn’t just be viewed as a cybersecurity issue, because for a hardware supply chain attack, an adversary would likely need to physically infiltrate or tamper with the manufacturing process. Entering a manufacturing facility or other stops along the logistics chain would require some level of network-level manipulation, such as faking a card reader or finding a way to trick physical defenses — that’s why Cisco Talos Incident Response looks for these types of things in Purple Team exercises.


How The Digital Twin Helps Build Resilient Manufacturing Operations

The digital twin is a sophisticated tool. It must be a true working virtual replica of the physical asset. Anything short of that means problems. To make it all work, consider several key aspects. You will most likely need multiple digital twins of the same physical asset. At least one digital twin should be online most of the time, collecting data from the real world. Other copies of the digital twin might be offline at times, but they use the real-world data in various training situations and for optimizing the equipment and the line. Getting data from the real world into the digital twin is one of the best and most common uses for the Industrial Internet of Things (IIoT). The latest digital twins are incorporating AI to help optimize the design process, learn from previous designs and create new equipment designs. AI helps create operator training scenarios and optimizes the equipment and production line. AI learns from the optimization process and, even with new wrinkles thrown into the real world, learns how to optimize the optimization process. It helps troubleshoot the equipment, finding problems quickly, long before they become problems.
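A stripped-down sketch of the online-twin role described above — mirroring IIoT sensor readings from the physical asset and flagging behavior that drifts outside the modeled envelope (the temperature values and tolerance are invented for the example):

```python
class DigitalTwin:
    """Minimal online twin: ingest real-world readings, flag drift."""

    def __init__(self, expected_temp: float, tolerance: float):
        self.expected_temp = expected_temp   # modeled operating point
        self.tolerance = tolerance
        self.history = []

    def ingest(self, reading: float):
        """IIoT feed: record a sensor reading from the physical asset."""
        self.history.append(reading)

    def drift_alerts(self):
        """Readings outside the modeled operating envelope."""
        return [r for r in self.history
                if abs(r - self.expected_temp) > self.tolerance]

twin = DigitalTwin(expected_temp=72.0, tolerance=3.0)
for reading in [71.5, 72.4, 73.1, 78.9, 72.0]:
    twin.ingest(reading)

print(twin.drift_alerts())   # the out-of-envelope reading(s)
```

The offline copies the article describes would replay `history` in training and optimization scenarios; a production twin would of course model far more than one scalar, but the collect-compare-flag loop is the same.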


3 tips for securing IoT devices in a connected world

Comprehensive visibility refers to an organization’s ability to identify, monitor and remotely manage each individual device connected to its network. Gaining this level of visibility is a crucial first step for maintaining a robust security posture and preventing unauthorized access or potential breaches. ... Addressing common vulnerabilities like built-in backdoors and unpatched firmware is essential for maintaining the security of connected devices. Built-in backdoors are hidden or undocumented access points in a device’s software or firmware that allow unauthorized access to the device or its network. These backdoors are often left by manufacturers for maintenance or troubleshooting purposes but can be exploited by attackers if not properly secured. ... One important step in secure deployment is limiting access to critical resources using network segmentation. Network segmentation involves dividing a network into smaller, isolated segments or subnets, each with its own security controls. This practice limits the movement of threats across the network, reducing the risk of a compromised IoT device leading to a broader security breach. 
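The segmentation idea above can be sketched as a policy check: IoT devices get their own subnet and may only reach explicitly allow-listed hosts in other segments. The subnets and the telemetry-host address here are invented; in practice this logic lives in firewall or VLAN ACL rules rather than application code:

```python
import ipaddress

# Hypothetical segmentation policy.
IOT_SUBNET  = ipaddress.ip_network("10.20.0.0/24")
CORP_SUBNET = ipaddress.ip_network("10.10.0.0/24")
ALLOWED_DESTINATIONS = {ipaddress.ip_address("10.10.0.5")}  # telemetry collector

def allow_traffic(src: str, dst: str) -> bool:
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if src_ip in IOT_SUBNET:
        # Deny lateral movement from the IoT segment into the corporate
        # segment unless the destination is explicitly allow-listed.
        return dst_ip not in CORP_SUBNET or dst_ip in ALLOWED_DESTINATIONS
    return True  # non-IoT traffic governed by other policies (out of scope)

print(allow_traffic("10.20.0.7", "10.10.0.5"))   # IoT -> telemetry: allowed
print(allow_traffic("10.20.0.7", "10.10.0.9"))   # IoT -> corp host: blocked
```

The point of the design is the blast-radius limit: a compromised camera in `10.20.0.0/24` can reach exactly one corporate host, not the whole network.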


Why countries are in a race to build AI factories in the name of sovereign AI

“The number of sovereign AI clouds is really quite significant,” Huang said in the earnings call. He said Nvidia wants to enable every company to build its own custom AI models. The motivations weren’t just about keeping a country’s data in local tech infrastructure to protect it. Rather, they saw the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation, said Colette Kress, CFO of Nvidia, in the earnings call. That was around the time when the Biden administration was restricting sales of the most powerful AI chips to China, requiring a license from the U.S. government before shipments could happen. That licensing requirement is still in effect. As a result, China reportedly began its own attempts to create AI chips to compete with Nvidia’s. But it wasn’t just China. Kress also said Nvidia was working with the Indian government and its large tech companies like Infosys, Reliance and Tata to boost their “sovereign AI infrastructure.” Meanwhile, French private cloud provider Scaleway was investing in regional AI clouds to fuel AI advances in Europe as part of a “new economic imperative,” Kress said. 


Is Spring AI Strong Enough for AI?

While the Spring framework itself does not have a dedicated AI library, it has proven to be an effective platform for developing AI-driven systems when combined with robust AI/ML frameworks. Spring Boot and Spring Cloud provide essential capabilities for deploying AI/ML models, managing REST APIs, and orchestrating microservices, all of which are crucial components for building and deploying production-ready AI systems. ... Spring, typically known as a versatile enterprise framework, showcases its effectiveness in high-quality AI deployments when combined with its robust scalability, security, and microservice architecture features. Its seamless integration with machine learning models, especially through REST APIs and cloud infrastructure, positions it as a formidable choice for enterprises seeking to integrate AI with intricate business systems. Nevertheless, for more specialized tasks such as model versioning, training orchestration, and rapid prototyping, AI-specific frameworks like TensorFlow Serving, Kubernetes, and MLflow offer tailored solutions that excel in high-performance model serving, distributed AI workflows, and streamlined management of the complete machine learning lifecycle with minimal manual effort.


Top Skills Chief AI Officers Must Have to Succeed in Modern Workplace

Domain knowledge is obviously vital. Possessing an understanding of core AI concepts is a must. Machine learning (ML), data analytics, and software development are elementary requirements a capable CAIO will leverage for specific business goals. Given the incipient stage that AI transformation is at, candidates will have to supplement their knowledge with continuous learning, adaptability, and initiative. Notably, a CAIO must use their expertise to arrive at data-driven decisions—it sets a good professional apart and highlights their capacity to troubleshoot accurately. ... A CAIO must translate AI concepts into clear strategies, prioritizing among multiple potential implementations based on their judgment of what will deliver the greatest value. This involves setting concrete goals such as improved efficiency, enhanced customer engagement, or increased employee productivity, and devising a roadmap to achieve them. ... Beyond the technical knowledge and strategic acumen, a powerful grasp of how business processes work within an organisation and why they function the way they do is crucial. CAIOs must foremost align with this culture and find ways to integrate AI within that framework.


5 Ways to Keep Global Development Teams Productive

A significant challenge for global development teams is ensuring smooth collaboration between different locations. Without the right tools and processes, team members can experience delays due to time zone differences, slow data access, or inconsistent version control systems. To improve collaboration, development teams should implement systems that provide fast, reliable access to codebases, regardless of location. Real-time collaboration tools that synchronize work across global teams are essential. For instance, platforms that replicate repositories in real-time across different sites ensure that all team members are working with the latest version of the code, reducing the risk of inconsistencies. ... Compliance with data protection laws, such as the GDPR or CCPA, is also essential for companies working across borders. Development teams need to be mindful of where data is stored and ensure that their tools meet the necessary compliance requirements. Security policies should be applied consistently across all locations to prevent breaches and data leaks, which can lead to significant financial and reputational damage.



Quote for the day:

“Without continual growth and progress, such words as improvement, achievement, and success have no meaning.” -- Benjamin Franklin

Daily Tech Digest - September 25, 2024

When technical debt strikes the security stack

“Security professionals are not immune from acquiring their own technical debt. It comes through a lack of attention to periodic reviews and maintenance of security controls,” says Howard Taylor, CISO of Radware. “The basic rule is that security rapidly decreases if it is not continuously improved. The time will come when a security incident or audit will require an emergency collection of the debt.” ... The paradox of security technical debt is that many departments concurrently suffer from both solution debt that causes gaps in coverage or capabilities, as well as rampant tool sprawl that eats up budget and makes it difficult to effectively use tools. ... “Detection engineering is often a large source of technical debt: over the years, a great detection engineering team can produce many great detections, but the reliability of those detections can start to fade as the rest of the infrastructure changes,” he says. “Great detections become less reliable over time, the authors leave the company, and the detection starts to be ignored. This leads to waste of energy and very often cost.” ... Role sprawl is another common scenario that contributes significantly to security debt, says Piyush Pandey, CEO at Pathlock.


Google Announces New Gmail Security Move For Millions

From the Gmail perspective, the security advisor will include a security sandbox where all email attachments will be scanned for malicious software employing a virtual environment to safely analyze said files. Google said the tool can “delay message delivery, allow customization of scan rules, and automatically move suspicious messages to the spam folder.” Gmail also gets enhanced safe browsing which gives additional protection by scanning incoming messages for malicious content before it is actually delivered. ... A Google spokesperson told me that the Gemini AI app is to get enterprise-grade security protections in core services now. With availability from October 15, for customers running on a Workspace Business, Enterprise, or Frontline plan, Google said that “with all of the core Workspace security and privacy controls in place, companies have the tools to deploy AI securely, privately and responsibly in their organizations in the specific way that they want it.” The critical components of this security move include ensuring Gemini is subject to the same privacy, security, and compliance policies as the rest of the Workspace core services, such as Gmail and Docs.


The Next Big Interconnect Technology Could Be Plastic

e-Tube technology is a new, scalable interconnect platform that uses radio wave transmission over a dielectric waveguide made of – drumroll – common plastic material such as low-density polyethylene (LDPE). While waveguide theory has been studied for many years, only a few organizations have applied the technology for mainstream data interconnect applications. Because copper and optical interconnects are historically entrenched technologies, most research has focused on extending copper life or improving energy and cost efficiency of optical solutions. But now there is a shift toward exploring the e-Tube option that delivers a combination of benefits that copper and optical cannot, including energy-efficiency, low latency, cost-efficiency and scalability to multi-terabit network speeds required in next-gen data centers. The key metrics for data center cabling are peak throughput, energy efficiency, low latency, long cable reach and cost that enables mass deployment. Across these metrics, e-Tube technology provides advantages compared to copper and optical technologies. Traditionally, copper-based interconnects have been considered an inexpensive and reliable choice for short-reach data center applications, such as top-of-rack switch connections. 


From Theory to Action: Building a Strong Cyber Risk Governance Framework

Setting your risk appetite is about more than just throwing a number out there. It’s about understanding the types of risks you face and translating them into specific, measurable risk tolerance statements. For example, “We’re willing to absorb up to $1 million in cyber losses annually but no more.” Once you have that in place, you’ll find decision-making becomes much more straightforward. ... If your current cybersecurity budget isn't sufficient to handle your stated risk appetite, you may need to adjust it. One of the best ways to determine if your budget aligns with your risk appetite is by using loss exceedance curves (LECs). These handy charts allow you to visualize the forecasted likelihood and impact of potential cyber events. They help you decide where to invest more in cybersecurity and perhaps where even to cut back. ... One thing that a lot of organizations miss in their cyber risk governance framework is the effective use of cyber insurance. Here's the trick: cyber insurance shouldn’t be used to cover routine losses. Doing so will only lead to increased premiums. Instead, it should be your safety net for the larger, more catastrophic incidents – the kinds that keep executives awake at night.
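A loss exceedance curve of the kind mentioned above is straightforward to estimate by Monte Carlo simulation: simulate many years of incident frequency and severity, then compute the fraction of simulated years whose total loss exceeds each threshold. The frequency and severity distributions below are illustrative assumptions, not calibrated figures:

```python
import random

random.seed(42)

def simulate_annual_loss():
    # Assumed: 0-4 incidents per year, lognormal loss per incident
    # (median around $60k); real models would fit these to incident data.
    events = random.randint(0, 4)
    return sum(random.lognormvariate(11, 1.2) for _ in range(events))

years = [simulate_annual_loss() for _ in range(10_000)]

def exceedance_probability(threshold: float) -> float:
    """Estimated probability that annual losses exceed the threshold."""
    return sum(loss > threshold for loss in years) / len(years)

for threshold in (100_000, 1_000_000, 5_000_000):
    print(f"P(annual loss > ${threshold:,}): "
          f"{exceedance_probability(threshold):.3f}")
```

Plotting `exceedance_probability` across thresholds gives the LEC; comparing it against the stated risk tolerance (e.g., the $1 million example above) shows whether current controls, budget, and insurance attachment points line up with appetite.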


Is Prompt Engineering Dead? How To Scale Enterprise GenAI Adoption

If you pick a model that is a poor fit for your use case, it will not be good at determining the context of the question and will fail to retrieve a reference point for the response. In those situations, the lack of reference data needed to provide an accurate response contributes to a hallucination. There are many situations where you would prefer the model to give no response at all rather than fabricate one; when no exact answer is available, however, the model will take data points it considers contextually relevant to the query and return an inaccurate answer. ... To leverage LLMs effectively at an enterprise scale, businesses need to understand their limitations. Prompt engineering and RAG can improve accuracy, but LLMs must be tightly limited in domain knowledge and scope. Each LLM should be trained for a specific use case, using a specific dataset with data owners providing feedback. This ensures no chance of confusing the model with information from different domains. The training process for LLMs differs from traditional machine learning, requiring human oversight and quality assurance by data owners.


AI disruption in Fintech: The dawn of smarter financial solutions

Financial institutions face diverse fraud challenges, from identity theft to fund transfer scams. Manual analysis of countless daily transactions is impractical. AI-based systems are empowering Fintechs to analyze data, detect anomalies, and flag suspicious activities. AI is monitoring transactions, filtering spam, and identifying malware. It can recognise social engineering patterns and alert users to potential threats. While fraudsters also use AI for sophisticated scams, financial institutions can leverage AI to identify synthetic content and distinguish between trustworthy and untrustworthy information. ... AI is transforming fintech customer service, enhancing retention and loyalty. It provides personalised, consistent experiences across channels, anticipating needs and offering value-driven recommendations. AI-powered chatbots handle common queries efficiently, allowing human agents to focus on complex issues. This technology enables 24/7 support across various platforms, meeting customer expectations for instant access. AI analytics predict customer needs based on financial history, transaction patterns, and life events, enabling targeted, timely offers. 


CIOs Go Bold

In business, someone who is bold is an individual who exudes confidence and assertiveness and is business savvy. However, there is a fine line between being assertive and confident in a way that is admired and being perceived as overbearing and hard to work with. ... If your personal CIO goals include being bolder, the first step is for you to self-assess. Then, look around. You probably already know individuals in the organization or colleagues in the C-suite who are perceived as being bold movers and shakers. What did they do to acquire this reputation? ... To get results from the ideas you propose, the outcomes of your ideas must solve strategic goals and/or pain points in the business. Consequently, the first rule of thumb for CIOs is to think beyond the IT box. Instead, ask questions like how an IT solution can help solve a particular challenge for the business. Digitalization is a prime example. Early digitalization projects started out with missions such as eliminating paper by digitalizing information and making it more searchable and accessible. Unfortunately, being able to search and access data was hard to quantify in terms of business results. 


What does the Cyber Security and Resilience Bill mean for businesses?

The Bill aims to strengthen the UK’s cyber defences by ensuring that critical infrastructure and digital services are secure by protecting those services and supply chains. It’s expected to share common ground with NIS2 but there are also some elements that are notably absent. These differences could mean the Bill is not quite as burdensome as its European counterpart but, equally, it risks being less effective. ... The problem now is that many businesses will be looking at both sets of regulations and scratching their heads in confusion. Should they assume that the Bill will follow the trajectory of NIS2 and make preparations accordingly, or should they assume it will continue to take a lighter touch and one that may not even apply to them? There’s no doubt that NIS2 will introduce a significant compliance burden, with one report suggesting it will cost upwards of 31.2bn euros per year. Then there’s the issue of those that will need to comply with both sets of regulations, i.e., those entities that either supply to customers or have offices on the continent. They will be looking for the types of commonalities we’ve explored here in order to harmonise their compliance efforts and achieve economies of scale. 


3 Key Practices for Perfecting Cloud Native Architecture

As microservices proliferate, managing their communication becomes increasingly complex. Service meshes like Istio or Linkerd offer a solution by handling service discovery, load balancing, and secure communication between services. This allows developers to focus on building features rather than getting bogged down by the intricacies of inter-service communication. ... Failures are inevitable in cloud native environments. Designing microservices with fault isolation in mind helps prevent a single service failure from cascading throughout the entire system. By implementing circuit breakers and retry mechanisms, organizations can enhance the resilience of their architecture, ensuring that their systems remain robust even in the face of unexpected challenges. ... Traditional CI/CD pipelines often become bottlenecks during the build and testing phases. To overcome this, modern CI/CD tools that support parallel execution should be leveraged. ... Not every code change necessitates a complete rebuild of the entire application. Organizations can significantly speed up the pipeline while conserving resources by implementing incremental builds and tests, which only recompile and retest the modified portions of the codebase.
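To make the circuit-breaker idea above concrete, here is a minimal sketch of the pattern (not tied to Istio or Linkerd, whose real configuration differs): after a set number of consecutive failures the breaker opens and fails fast, and once a cooldown elapses it lets a single trial call through before deciding whether to close again. The class name and parameters are illustrative:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: open after max_failures consecutive errors,
    fail fast while open, allow a trial call after the cooldown."""

    def __init__(self, max_failures=3, cooldown=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Wrapping each downstream service call this way is what stops one failing dependency from tying up threads across the system; retry logic would sit outside the breaker so retries cannot hammer an already-failing service.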


Copilots and low-code apps are creating a new 'vast attack surface' - 4 ways to fix that

"In traditional application development, apps are carefully built throughout the software development lifecycle, where each app is continuously planned, designed, implemented, measured, and analyzed," they explain. "In modern business application development, however, no such checks and balances exists and a new form of shadow IT emerges." Within the range of copilot solutions, "anyone can build and access powerful business apps and copilots that access, transfer, and store sensitive data and contribute to critical business operations with just a couple clicks of the mouse or use of natural language text prompts," the study cautions. "The velocity and magnitude of this new wave of application development creates a new and vast attack surface." Many enterprises encouraging copilot and low-code development are "not fully embracing that they need to contextualize and understand not only how many apps and copilots are being built, but also the business context such as what data the app interacts with, who it is intended for, and what business function it is meant to accomplish."



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - September 24, 2024

Effective Strategies for Talking About Security Risks with Business Leaders

As with every difficult conversation, CISOs must pick the right time, place and strategy to discuss cyber risks with the executive team and staff. Instead of waiting for the opportunity to arise, CISOs should proactively engage with individuals at all levels of the organization to influence them and ensure an understanding of security policies and incident response. These conversations could come in the form of monthly or quarterly meetings with senior stakeholders to maintain the cadence and consistency of the conversations, discuss how the threat landscape is evolving and review their part of the business through a cybersecurity lens. They could also be casual watercooler chats with staff members, which not only help to educate and inform employees but also build vital internal relationships that can affect online behaviors. In addition to talking, CISOs must also listen to and learn about key stakeholders to tailor conversations around their interests and concerns. ... If you're talking to the board, you'll need to know the people around that table. What are their interests, and how can you communicate in a way that resonates with them and gets their attention? Use visualization techniques and find a "cyber ally" on the board who will back you and help reinforce your ideas and the information you share.


Is Explainable AI Explainable Enough Yet?

“More often than not, the higher the accuracy provided by an AI model, the more complex and less explainable it becomes, which makes developing explainable AI models challenging,” says Godbole. “The premise of these AI systems is that they can work with high-dimensional data and build non-linear relationships that are beyond human capabilities. This allows them to identify patterns at a large scale and provide higher accuracy. However, it becomes difficult to explain this non-linearity and provide simple, intuitive explanations in understandable terms.” Other challenges are providing explanations that are both comprehensive and easily understandable and the fact that businesses hesitate to explain their systems fully for fear of divulging intellectual property (IP) and losing their competitive advantage. “As we make progress towards more sophisticated AI systems, we may face greater challenges in explaining their decision-making processes. For autonomous systems, providing real-time explainability for critical decisions could be technically difficult, even though it will be highly necessary,” says Godbole. When AI is used in sensitive areas, it will become increasingly important to explain decisions that have significant ethical implications, but this will also be challenging.


The challenge of cloud computing forensics

Data replication across multiple locations complicates forensics processes that require the ability to pinpoint sources for analysis. Consider the challenge of retrieving deleted data from cloud systems—not just a technical obstacle, but a matter of accountability that is often not addressed by IT until it’s too late. Multitenancy involves shared resources among multiple users, making it difficult to distinguish and segregate data. This is a systemic problem for cloud security, and it is particularly problematic for cloud platform forensics. The NIST document acknowledges this challenge and recommends the implementation of access mechanisms and frameworks so companies can maintain data integrity and manage incident response. That way, the mechanisms to deal with issues are already in place when they occur, because accounting happens on an ongoing basis. The lack of location transparency is a nightmare. Data resides in various physical jurisdictions, all with different laws and cultural considerations. Crimes may occur on a public cloud point of presence in a country that disallows warrants to examine the physical systems, whereas other countries have more options for law enforcement. Guess which countries the criminals choose to leverage.


Is the rise of genAI about to create an energy crisis?

Though data center power consumption is expected to double by 2028, according to IDC research director Sean Graham, AI is still a small percentage of overall data center energy consumption — just 18%. “So, it’s not fair to blame energy consumption on AI,” he said. “Now, I don’t mean to say AI isn’t using a lot of energy and data centers aren’t growing at a very fast rate. Data center energy consumption is growing at 20% per year. That’s significant, but it’s still only 2.5% of the global energy demand. “It’s not like we can blame energy problems exclusively on AI,” Graham said. ... Beyond the pressure from genAI growth, electricity prices are rising due to supply and demand dynamics, environmental regulations, geopolitical events, and extreme weather events fueled in part by climate change, according to an IDC study published today. IDC believes the higher electricity prices of the last five years are likely to continue, making data centers considerably more expensive to operate. Amid that backdrop, electricity suppliers and other utilities have argued that AI creators and hosts should be required to pay higher prices for electricity — as cloud providers did before them — because they’re quickly consuming greater amounts of compute cycles and, therefore, energy compared to other users.


20 Years in Open Source: Resilience, Failure, Success

The rise of Big Tech has emphasized one of the most significant truths I’ve learned: the need for digital sovereignty. Over time, I’ve observed how centralized platforms can slowly erode consumers’ authority over their data and software. Today, more than ever, I believe that open source is a crucial path to regaining control — whether you’re an individual, a business, or a government. With open source software, you own your infrastructure, and you’re not subject to the whims of a vendor deciding to change prices, terms, or even direction. I’ve learned that part of being resilient in this industry means providing alternatives to centralized solutions. We built CryptPad to offer an encrypted, privacy-respecting alternative to tools like Google Docs. It hasn’t been easy, but it’s a project I believe in because it aligns with my core belief: people should control their data. I would improve the way the community communicates the benefits of open source. The conversation all too frequently concentrates on “free vs. paid” software. In reality, what matters is the distinction between dependence and freedom. I’ve concluded that we need to explain better how individuals may take charge of their data, privacy, and future by utilizing open source.


20 Tech Pros On Top Trends In Software Testing

The shift toward AI-driven testing will revolutionize software quality assurance. AI can intelligently predict potential failures, adapt to changes and optimize testing processes, ensuring that products are not only reliable, but also innovative. This approach allows us to focus on creating user experiences that are intuitive and delightful. ... AI-driven test automation has been the trend that almost every client of ours has been asking for in the past year. Combining advanced self-healing test scripts and visual testing methodologies has proven to improve software quality. This process also reduces the time to market by helping break down complex tasks. ... With many new applications relying heavily on third-party APIs or software libraries, rigorous security auditing and testing of these services is crucial to avoid supply chain attacks against critical services. ... One trend that will become increasingly important is shift-left security testing. As software development accelerates, security risks are growing. Integrating security testing into the early stages of development—shifting left—enables teams to identify vulnerabilities earlier, reduce remediation costs and ensure secure coding practices, ultimately leading to more secure software releases.


How to manage shadow IT and reduce your attack surface

To effectively mitigate the risks associated with shadow IT, your organization should adopt a comprehensive approach that encompasses the following strategies:

Understanding the root causes: Engage with different business units to identify the pain points that drive employees to seek unauthorized solutions. Streamline your IT processes to reduce friction and make it easier for employees to accomplish their tasks within approved channels, minimizing the temptation to bypass security measures.

Educating employees: Raise awareness across your organization about the risks associated with shadow IT and provide approved alternatives. Foster a culture of collaboration and open communication between IT and business teams, encouraging employees to seek guidance and support when selecting technology solutions.

Establishing clear policies: Define and communicate guidelines for the appropriate use of personal devices, software, and services. Enforce consequences for policy violations to ensure compliance and accountability.

Leveraging technology: Implement tools that enable your IT team to continuously discover and monitor all unknown and unmanaged IT assets. 


How software teams should prepare for the digital twin and AI revolution

By integrating AI to enhance real-time analytics, users can develop a more nuanced understanding of emerging issues, improving situational awareness and allowing them to make better decisions. Using in-memory computing technology, digital twins produce real-time analytics results that users aggregate and query to continuously visualize the dynamics of a complex system and look for emerging issues that need attention. In the near future, generative AI-driven tools will magnify these capabilities by automatically generating queries, detecting anomalies, and then alerting users as needed. AI will create sophisticated data visualizations on dashboards that point to emerging issues, giving managers even better situational awareness and responsiveness. ... Digital twins can use ML techniques to monitor thousands of entry points and internal servers to detect unusual logins, access attempts, and processes. However, detecting patterns that integrate this information and create an overall threat assessment may require data aggregation and query to tie together the elements of a kill chain. Generative AI can assist personnel by using these tools to detect unusual behaviors and alert personnel who can carry the investigation forward.
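As a toy illustration of the kind of monitoring described above, a digital twin tracking login counts per entry point could flag outliers with a simple z-score test before handing them off for kill-chain correlation. Real deployments would use richer ML models; the threshold and sample data here are made up:

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return indices of counts more than `threshold` population
    standard deviations from the mean (a toy anomaly detector)."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]
```

For hourly login counts of [50, 52, 48, 51, 49, 500, 50, 53], the spike at index 5 is flagged; an alert on that hour is the sort of signal that aggregation and querying would then tie to other elements of a potential kill chain.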


The Open Source Software Balancing Act: How to Maximize the Benefits And Minimize the Risks

OSS has democratized access to cutting-edge technologies, fostered a culture of collaboration and empowered businesses to prioritize innovation. By tapping into the vast pool of open source components available, software developers can accelerate product development, minimize time-to-market and drive innovation at scale. ... Paying down technical debt requires two things: consistency and prioritization. First, organizations should opt for fewer high-quality suppliers with well-maintained open source projects because they have greater reliability and stability, reducing the likelihood of introducing bugs or issues into their own codebase that rack up tech debt. In terms of transparency, organizations must have complete visibility into their software infrastructure. This is another area where software bills of materials (SBOMs) are key. With an SBOM, developers have full visibility into every element of their software, which reduces the risk of using outdated or vulnerable components that contribute to technical debt. There’s no question that open source software offers unparalleled opportunities for innovation, collaboration and growth within the software development ecosystem. 


Is AI really going to burn the planet?

Trying to understand exactly how energy-intensive model training is, is even more complex than understanding exactly how big data center GHG sins are. A common “AI is environmentally bad” statistic is that training a large language model like GPT-3 is estimated to use just under 1,300 megawatt-hours (MWh) of electricity, about as much power as consumed annually by 130 US homes, or the equivalent of watching 1.63 million hours of Netflix. The source for this stat is AI company Hugging Face, which does seem to have used some real science to arrive at these numbers. It also, to quote a May Hugging Face probe into all this, seems to have proven that "multi-purpose, generative architectures are orders of magnitude more [energy] expensive than task-specific systems for a variety of tasks, even when controlling for the number of model parameters.” It’s important to note that what’s being compared here are task-specific AI runs (optimized, smaller models trained in specific generative AI tasks) and multi-purpose (a machine learning model that should be able to process information from different modalities, including images, videos, and text).
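The homes comparison is easy to sanity-check. Assuming an average US household uses roughly 10.5 MWh of electricity per year (an EIA-style estimate, not a figure from the article):

```python
# Sanity-check of the article's comparison: ~1,300 MWh for a GPT-3-scale
# training run vs. average annual US household electricity use.
TRAINING_MWH = 1_300.0
HOME_MWH_PER_YEAR = 10.5  # assumed US household average

homes_equivalent = TRAINING_MWH / HOME_MWH_PER_YEAR
print(round(homes_equivalent))
```

This works out to roughly 124 homes, consistent with the article's "about 130"; the exact figure shifts with the household-consumption estimate used.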



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls

Daily Tech Digest - September 23, 2024

Clear as mud: global rules around AI are starting to take shape but remain a little fuzzy

There is some subjectivity within the EU efforts, as “high risk” is defined as able to cause harm to society, which could receive wildly different interpretations. That said, the effort comes from the right place, which is to protect and ensure the “fundamental rights of EU citizens.” The EU Council views the act as designed to stimulate investment and innovation, while at the same time, carving out exceptions for “military and defense as well as research purposes.” This perspective is not much different from the one the industry offered up in 2022 before the US Senate during discussions on the challenges of cybersecurity in the age of AI. At that hearing, two years ago, the Senate was urged not to stifle innovation as adversaries and economic competitors in other nations were not going to be slowing down their innovation. ... When I asked Price for his thoughts on the US position around global AI that many nations should work together to ensure safety without hampering evolution, he agreed that “security considerations must remain at the forefront of these discussions to ensure that widespread AI adoption does not inadvertently amplify cybersecurity risks.”


Turning Compliance Into Strategy: 4 Tips For Navigating AI Regulation

For Chief Strategy Officers (CSOs), helping their organizations to understand and adapt to AI regulation is essential. CSOs can play a key role in guiding their organizations to turn compliance into strategy ... Establish effective governance frameworks that align with the AI Act’s requirements. This framework should include clear policies on data usage, transparency, accountability and ethical AI practices, as well as implementing AI-driven technologies, to help manage risks. Additionally, developing a governance structure that includes roles and responsibilities for AI oversight, and working with operational leaders to embed governance practices into day-to-day business operations can support the company’s long-term success and ethical reputation. ... Companies that form strategic partnerships are better positioned to stay competitive in the market, helping them navigate regulations like the AI Act. By combining the unique strengths of each partner, business leaders can develop more robust and scalable solutions that are better equipped to handle the nuances of regulations. ... The EU AI Act marks a significant shift in the regulatory landscape, challenging businesses to rethink how they develop and deploy AI technologies. 


‘Harvest now, decrypt later’: Why hackers are waiting for quantum computing

The “harvest now, decrypt later” phenomenon in cyberattacks — where attackers steal encrypted information in the hopes they will eventually be able to decrypt it — is becoming common. As quantum computing technology develops, it will only grow more prevalent. ... The average hacker will not be able to get a quantum computer for years — maybe even decades — because they are incredibly costly, resource-intensive, sensitive and prone to errors if they are not kept in ideal conditions. To clarify, these sensitive machines must stay just above absolute zero (−459.67 degrees Fahrenheit, to be exact) because thermal noise can interfere with their operations. However, quantum computing technology is advancing daily. Researchers are trying to make these computers smaller, easier to use and more reliable. Soon, they may become accessible enough that the average person can own one. ... The Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST) soon plan to release post-quantum cryptographic standards. The agencies are leveraging the latest techniques to make ciphers quantum computers cannot crack. 


AI-driven demand forecasting ensures we’re ‘game-ready’ by predicting user behaviour and traffic

At Dream Sports, AI and machine learning are central to enhancing user experiences, optimising predictions, and securing our platform. AI-driven demand forecasting ensures we’re “game-ready” by predicting user behaviour and traffic for smooth gameplay during peak times. With over 250 million users, our ML systems safeguard platform integrity, detecting and preventing violations to ensure fair play. We also leverage ML to personalise user experiences, optimise rewards programs, and use causal inference for data-driven decisions across game recommendations and contest management. Generative AI initiatives include developing an AI Coach and enhancing user verification and customer success systems. Our collaboration with Columbia University’s Dream Sports AI Innovation Centre advances AI/ML applications in sports, focusing on predictive modelling, fan engagement, and sports tech optimisation. This partnership, alongside internal initiatives, helps us lead in reshaping sports technology with more immersive, personalised experiences through the rise of generative AI.


5 things your board needs to know about innovation and thought leadership

The most successful organizations have a programmatic approach to managing innovation and thought leadership, which helps them build organizational competency over time in both disciplines. How it’s structured is less important since it can be centralized, decentralized, or hybrid, but having a defined program with a mission, vision, strategy, and operating plan at a minimum is critical. As an example, the US Navy set a vision for 2030 related to the future of naval information warfare, creating a Hollywood-produced video, which became a north star for the organization, unlocking millions in funding for AI. The focus and types of innovation and thought leadership you pursue are important, too. In addition to an internal and client-facing focus, have a known set of innovation enablers you plan to pursue such as data and analytics, automation, adaptability, cloud, digital twins and AI, but be open to adding others as needed. The same is true for your editorial calendar for thought leadership and the topics you plan to address. And hear out new thought leadership topics that may come from left field, which could benefit customers. In addition, keep the board apprised of your multi-year innovation journey, goals and objectives. 


Cloud Security Risk Prioritization is Broken. Here’s How to Fix It.

Business context is critical. It’s easy to understand, for example, that a CVE in a payment application is a high priority, whereas the same CVE in a search application is low priority. Security programs must also take this into account. Effective security paradigms understand which detected vulnerabilities have the greatest business impact, so security teams aren’t spending time prioritizing lower-risk vulnerabilities. Traditional security applications run tests on code before it’s pushed. While this pre-production testing is still a best practice, it misses how code interacts with the environmental variables, configurations, and sensitive data it will coexist with once deployed. This insight is essential when you’re working to understand how a cloud-native application will function when live. Technologies such as application security posture management (ASPM) facilitate a more proactive approach by automating security review processes in production and creating a live view of an application, its vulnerabilities, and business risks. ASPM provides visibility into what’s happening in the cloud, giving security teams a better understanding of application behavior and attack surfaces so they can prioritize appropriately. 


A Look Inside the World of Ethical Hacking to Benefit Security

While there can be many different siloes and areas of focus within the ethical hacking community, enterprises tend to interact with these experts in a few different ways. Penetration testing is a common connection between enterprises and ethical hackers, often one driven by compliance requirements. Larger, more mature organizations may employ penetration testers internally in addition to contracting with third parties, while many organizations rely solely on third parties. Enterprises may also engage ethical hackers to participate in red teaming exercises, simulations of real-world attacks. Typically, these exercises have a specific objective, and ethical hackers are free to use whatever means available to achieve that objective. Hannan offers a physical security assessment as an example of a red teaming exercise. “Walk into a building, find an unlocked computer, and plug a USB device into the computer,” he details. “That might be one of your objectives. How do you get into the building? Do you impersonate a delivery person? Do you impersonate an HVAC person? Do you just show up in a yellow vest and a hard hat and walk into the building? That's left up to you.”


Offensive cyber operations are more than just attacks

AI is already transforming offensive cyber operations by expanding data visibility and streamlining threat intelligence, which are critical for both defensive and offensive purposes. AI enables faster decision-making and the ability to predict and respond to threats more effectively. However, it also empowers adversaries, allowing for more sophisticated attacks which could include generating deepfakes, designing advanced malware, and spreading misinformation at an unprecedented scale on social media platforms. Quantum computing, while still in its early stages, poses a significant long-term challenge. Its potential to break current encryption methods could render many of today’s cybersecurity practices obsolete, creating new vulnerabilities for exploitation. ... A key limitation is time. Once a threat is identified, the race to harden systems and close vulnerabilities begins. The longer it takes to respond, the more risk organizations face. As threats become more sophisticated, defenders must continuously adapt and anticipate new methods of attack, making speed, agility, and proactive defense critical factors in minimizing exposure and mitigating risk.


Quantum Risks Pose New Threats for US Federal Cybersecurity

Adversaries including China are investing heavily in quantum computing in an apparent effort to outpace the United States, where bureaucratic red tape and unforeseen costs could significantly hinder federal efforts to keep up. "Upgrading this infrastructure isn’t going to be quick or cheap," said Georgianna Shea, chief technologist of the Foundation for Defense of Democracies' Center on Cyber and Technology Innovation. Testing for quantum-resistant encryption could reveal compatibility issues with legacy systems, such as increased power demands, reduced performance, larger key sizes and the need to adjust existing protocols and application stacks for keys and digital signatures, she told Information Security Media Group. The Foundation for Defense of Democracies is set to release new guidance for CIOs on Monday that will aim to lay out a road map for quantum readiness. The report is structured as a six-point plan that includes designating a leader, taking inventory of all encryption systems, prioritizing based on risk, understanding mitigation strategies, developing a transition plan and regularly monitoring and adjusting it as needed.


The Rise of Generative AI Fuels Focus on Data Quality

Traditionally, data quality initiatives have often been isolated efforts, disconnected from core business goals and strategic initiatives. Some data quality initiatives are compliance-focused, data cleaning, or departmental efforts — all are very important but not directly tied to larger business goals. This makes it difficult to quantify the impact of data quality improvements and secure the necessary investment. As a result, data quality struggles to gain the crucial attention it deserves. However, the rise of GenAI presents a game-changer for enterprises. GenAI apps rely heavily on high-quality data to generate accurate and reliable results. ... Organizations need a new way to organize the data and make it GenAI-ready, making sure it is continuously synced with the source systems, continuously cleansed according to a company's data quality policies, and continuously protected. But the solution extends beyond technology. Organizations must prioritize data quality by establishing key performance indicators (KPIs) directly linked to GenAI success, such as customer satisfaction, resolution rate, and response time.



Quote for the day:

“If you want to make a permanent change, stop focusing on the size of your problems and start focusing on the size of you!” -- T. Harv Eker

Daily Tech Digest - September 22, 2024

Cloud Exit: 42% of Companies Move Data Back On-Premises

Agarwal said: “Nobody is running a cloud business as a charity.” When businesses reach a size where it is economically viable, constructing their own infrastructure can save significant costs while eliminating the “cloud middleman” and its associated expenses. That said, the cloud is certainly not “just someone else’s computer,” as the joke goes. It has added immense value to those who adapted to it. But like artificial intelligence (AI), it has been mythologized and exaggerated as the ultimate tool for efficiency, romanticized to the point where pervasive myths about cost-effectiveness, reliability, and security are enough for businesses to dive headfirst into adoption. These myths are frequently repeated in high-profile forums, shaping perceptions that do not always align with reality and leading many to commit without fully considering the drawbacks and real-world challenges. ... Avoidable charges and cloud waste were another noteworthy issue revealed in the 2023 State of Cloud Strategy Survey by HashiCorp: 94% of respondents reported incurring unnecessary expenses because of underutilized cloud resources. These costs often result from maintaining idle resources that serve none of the company’s actual operational needs.
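Finding the idle resources behind that 94% figure usually starts with a utilization sweep. The sketch below is a hypothetical example: the threshold, instance names, and metrics dict are invented, and in practice the numbers would come from your cloud provider's monitoring API rather than a hand-built dictionary.

```python
# Flag cloud instances whose average CPU utilization stayed below a
# threshold over a sampling window -- candidates for rightsizing or
# shutdown. All names and numbers are hypothetical.

def find_idle(instances, cpu_threshold=5.0, min_samples=24):
    """Return names of instances with enough samples and low average CPU."""
    idle = []
    for name, cpu_samples in instances.items():
        if len(cpu_samples) >= min_samples:
            avg = sum(cpu_samples) / len(cpu_samples)
            if avg < cpu_threshold:
                idle.append(name)
    return idle

metrics = {
    "web-frontend": [40.0] * 24,   # busy, keep
    "old-batch-job": [1.0] * 24,   # idle, candidate for shutdown
}
print(find_idle(metrics))  # ['old-batch-job']
```

The `min_samples` guard avoids flagging an instance on too little data, a common false-positive source in waste reports.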


Revitalize aging data centers

Before tackling the specifics of upgrading a data center, it is important to conduct a thorough assessment to identify specific needs and areas for improvement. This assessment should examine the data center's existing infrastructure, including server capacity, storage solutions, and energy consumption. It is also important to evaluate how these elements stack up against current power standards, grid connection requirements, efficiency benchmarks, and environmental and permit regulations. By benchmarking against newer facilities, operators can identify key areas where technological and infrastructural enhancements are needed. ... While integrating the latest server technologies might seem obvious, these systems demand different support from existing infrastructure, and the increased computational loads must not compromise system reliability. Transitioning to newer generations of processors can therefore require updates to your data center's support infrastructure. This includes upgrading power distribution units (PDUs) to handle higher power densities, enhancing network infrastructure to support faster data transfer rates, and reinforcing structural components to accommodate the increased weight and space requirements of modern equipment.
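The PDU question comes down to simple arithmetic: does per-rack load after the refresh still fit within the PDU's rated capacity? A back-of-the-envelope check, using invented example figures rather than any vendor's specifications:

```python
# Check whether existing PDUs can handle a denser rack after a server
# refresh. All figures below are hypothetical examples.

def rack_load_kw(servers_per_rack, watts_per_server):
    """Total rack power draw in kilowatts."""
    return servers_per_rack * watts_per_server / 1000

old_load = rack_load_kw(20, 350)   # 7.0 kW per rack today
new_load = rack_load_kw(20, 800)   # 16.0 kW with denser servers

PDU_CAPACITY_KW = 10.0
print(new_load, new_load <= PDU_CAPACITY_KW)  # 16.0 False -> PDUs need upgrading
```

A real assessment would also account for redundancy (A/B feeds), derating margins, and cooling capacity, which scale with the same power figure.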


Personhood: Cybersecurity’s next great authentication battle as AI improves

Although intriguing, the personhood plan has fundamental issues. First, credentials are very easily faked by gen AI systems. Second, customers may be hard-pressed to take the significant time and effort to gather documents and wait in line at a government office to prove that they are human simply to visit public websites or sales call centers. Some argue that the mass creation of humanity cookies would create another pivotal cybersecurity weak spot. “What if I get control of the devices that have the humanity cookie on it?” FaceTec’s Meier asks. “The Chinese might then have a billion humanity cookies at one person’s control.” Brian Levine, a managing director for cybersecurity at Ernst & Young, believes that, while such a system might be helpful in the short run, it likely won’t effectively protect enterprises for long. “It’s the same cat-and-mouse game” that cybersecurity vendors have always played with attackers, Levine says. ... Sandy Carielli, a Forrester principal analyst and lead author of the Forrester bot report, says a critical element of any bot defense program is to not delay good bots, such as legitimate search engine spiders, in the quest to block bad ones. “The crux of any bot management system has to be that it never introduces friction for good bots, and certainly not for legitimate customers.”


What’s behind the return-to-office demands?

The effect is clear: an average employee wants to work three days a week in the office, while managers want them there four days. The managers win, of course: today half of all civil servants in Stockholm County work in the office four days a week, a clear increase. There are different conclusions one can draw. Mine are these: Physical workplaces and physical interaction are better than digital workspaces and meetings when it comes to creative tasks and social and cultural togetherness. Here, depending on the nature of the work, employees and managers are largely in agreement. Leadership in hybrid work models has not developed in the ways and at the pace required. Managers still have an excessive need for control, and no way to deal with it other than trying to return to what was previously comfortable. Employees have probably not managed to convey to their bosses the benefits of remote work for the employer. It’s great that your life puzzle is easier and you can take power walks and do laundry, but how does that help the company? It’s no wonder that whispering about sneaky vacations is taking off. And there’s an elephant in the room we should talk about: people really hate open office spaces and activity-based workplaces.


Passwordless AND Keyless: The Future of (Privileged) Access Management

Because SSH keys are functionally different from passwords, traditional PAMs don't manage them very well. Legacy PAMs were built to vault passwords, and they try to do the same with keys. Without going into too much detail about key functionality (like public and private keys), vaulting private keys and handing them out on request simply doesn't work. Keys must be secured on the server side; otherwise, keeping them under control is a futile effort. Furthermore, your solution needs to discover keys before it can manage them. Most PAMs can't. There are also key configuration files and other key(!) elements involved that traditional PAMs miss. ... Let's come back to the topic of passwords. Even if you have them vaulted, you aren't managing them in the best possible way. Modern, dynamic environments - using in-house or hosted cloud servers, containers, or Kubernetes orchestration - don't work well with vaults or with PAMs that were built 20 years ago. This is why we offer modern ephemeral access, where the secrets needed to access a target are granted just-in-time for the session and automatically expire once the authentication is done. This leaves no passwords or keys to manage - at all.
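The ephemeral, just-in-time model can be illustrated with a toy grant that carries its own expiry, so there is nothing long-lived to vault. This is a hypothetical sketch of the concept, not any vendor's API; real systems typically use short-lived SSH certificates signed by a trusted CA rather than a dictionary.

```python
# Toy illustration of ephemeral access: a grant expires on its own, so
# there is no standing password or key to manage afterwards.
import time

def issue_grant(user, target, ttl_seconds=300):
    """Grant access to a target; the grant expires automatically."""
    return {"user": user, "target": target,
            "expires_at": time.time() + ttl_seconds}

def is_valid(grant, now=None):
    """Check a grant against the clock (or an injected time, for testing)."""
    return (now if now is not None else time.time()) < grant["expires_at"]

grant = issue_grant("alice", "prod-db", ttl_seconds=300)
print(is_valid(grant))                               # True right after issue
print(is_valid(grant, now=grant["expires_at"] + 1))  # False once expired
```

The key property is that revocation is implicit: once the TTL passes, the credential is worthless even if it leaked.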


Cybersecurity is Beyond Protecting Personal Data

Cyberattacks are not just about stealing personal data; they also involve stealing intellectual property and sensitive corporate information. In India, the number of data breaches has surged in recent years. The Indian Computer Emergency Response Team (CERT-IN) reported over 150,000 cyber incidents in 2023 alone, with significant breaches occurring in sectors such as finance, healthcare, and government. ... While there is a global scarcity of competent cybersecurity personnel, India is experiencing an exceptionally severe shortfall. A report by (ISC)² indicates a worldwide shortage of 3 million cybersecurity workers, with India contributing significantly to that gap. This deficiency hinders businesses' capacity to detect and address cyber threats. Compounding the problem, team members' ignorance and lack of training can lead to human mistakes, which are a common way for cyberattacks to get started. ... Compliance with cybersecurity legislation and standards is critical for data protection and retaining confidence. India's legal landscape is changing, with initiatives like the Information Technology Act and the Personal Data Protection Bill aimed at improving cybersecurity.


Google calls for halting use of WHOIS for TLS domain verifications

TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communications verifying that a server belongs to a trusted entity and encrypts all traffic passing between it and an end user. ... The rules for how certificates are issued and the process for verifying the rightful owner of a domain are left to the CA/Browser Forum. One "base requirement rule" allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved. ... Specifically, watchTowr researchers were able to receive a verification link for any domain ending in .mobi, including ones they didn’t own. The researchers did this by deploying a fake WHOIS server and populating it with fake records. Creation of the fake server was possible because dotmobiregistry.net—the previous domain hosting the WHOIS server for .mobi domains—was allowed to expire after the server was relocated to a new domain. watchTowr researchers registered the domain, set up the imposter WHOIS server, and found that CAs continued to rely on it to verify ownership of .mobi domains.
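The attack works because CAs parse the WHOIS record to find a contact address and then trust whoever reads mail there. A minimal sketch of that parsing step, using a fabricated WHOIS response (the field label and regex are illustrative assumptions; real WHOIS output varies by registry), shows why controlling the WHOIS server means controlling the verification email:

```python
# Extract a contact email from a WHOIS response the way a CA-style
# domain-validation check might. The response text is fabricated.
import re

def contact_email(whois_text):
    """Pull the registrant email field out of raw WHOIS text, if present."""
    match = re.search(r"Registrant Email:\s*(\S+@\S+)", whois_text)
    return match.group(1) if match else None

fake_response = """\
Domain Name: example.mobi
Registrant Email: attacker@evil.example
"""
print(contact_email(fake_response))  # attacker@evil.example
```

With the imposter server answering WHOIS queries for .mobi, the "registrant email" is whatever the attacker chooses, and the CA's verification link goes straight to them.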


How API Security Fits into DORA Compliance: Everything You Need to Know

Financial institutions rely heavily on third-party service providers, and APIs are the gateway through which many of these vendors access core banking systems. This introduces significant risk, as third-party APIs may become the weakest link in the supply chain. DORA places substantial emphasis on managing these risks, as outlined in Article 28, stating that financial entities must ensure that third-party providers “implement and maintain appropriate measures to manage ICT risks” and that institutions must “ensure the quality and integration of ICT services provided by third parties.” You need to start simple and be able to answer two questions: Who are your vendors? What third-party apps do you have connected? One of the biggest challenges here is the concept of shadow APIs: untracked, unauthorized, or forgotten endpoints that can remain active long after their intended purpose has passed. Shadow APIs expose financial institutions to vulnerabilities, making it difficult to track and control third-party access. DORA’s Article 28 further reinforces the need for financial institutions to “assess third-party ICT service providers’ ability to protect the integrity, security, and confidentiality of data, and to manage risks related to outsourcing.”
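At its core, shadow-API detection is a set difference: endpoints observed in live traffic minus endpoints in the documented inventory. A minimal sketch, with hypothetical endpoint paths:

```python
# Flag shadow APIs: endpoints seen in traffic but absent from the
# documented inventory. Paths below are hypothetical examples.

def shadow_apis(documented, observed):
    """Endpoints observed in traffic that no one has documented."""
    return sorted(set(observed) - set(documented))

documented = {"/api/v2/accounts", "/api/v2/payments"}
observed = {"/api/v2/accounts", "/api/v2/payments", "/api/v1/legacy-export"}
print(shadow_apis(documented, observed))  # ['/api/v1/legacy-export']
```

In practice the "observed" set would come from gateway logs or traffic mirroring, and each flagged endpoint would be traced back to a vendor or retired, which is exactly the vendor-inventory exercise the two starting questions demand.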


Dirty code still runs, and that’s not a good thing

Quality code benefits developers by minimizing the time and effort spent on patching and refactoring later. Having confidence that code is clean also enhances collaboration, allowing developers to more easily reuse code from colleagues or AI tools. This not only simplifies their work but also reduces the need for retroactive fixes and helps prevent and lower technical debt. To deliver clean code, developers should start with the right guardrails, tests, and analysis from the beginning, in the IDE. Pairing unit testing with static analysis further helps ensure quality. The sooner these reviews happen in the development process, the better. ... Developers and businesses can’t afford to perpetuate the cycle of bad code and, consequently, subpar software. Pushing poor-quality code through to deployment will only produce software that breaks down later, even if it seems to run fine in the interim. To end the cycle, developers must deliver software built on clean code before deploying it. By implementing effective reviews and tests that gatekeep bad code before it becomes a major problem, developers can better equip themselves to deliver software with both functionality and longevity.
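The "pairing unit testing with static analysis" idea can be shown with a toy static check: unit tests verify behavior, while a static pass catches issues (here, missing docstrings) that tests never exercise. This is an illustrative sketch using the standard-library `ast` module, not a real linter.

```python
# Toy static-analysis guardrail: flag functions without docstrings by
# inspecting the syntax tree, without running the code.
import ast

def functions_missing_docstrings(source):
    """Return names of top-level and nested functions lacking docstrings."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and ast.get_docstring(node) is None]

source = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''
print(functions_missing_docstrings(source))  # ['undocumented']
```

Real tools like linters and type checkers work on the same principle at far greater depth; the point is that the check runs in the IDE, before any code reaches review.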


The Perfect Balance: Merging AI and Design Thinking for Innovative Pricing Strategies

This combination of AI’s optimization and Design Thinking’s creative transformation is exactly what modern businesses need to stay competitive. Relying solely on AI to adjust pricing may lead to efficiency gains, but without the innovation brought by Design Thinking, businesses risk missing out on new opportunities to reshape their pricing models and align them more closely with customer needs. Conversely, while Design Thinking can spark innovation, without AI’s precision, companies might struggle to implement their ideas in a way that maximizes profitability. It is by uniting these two approaches that organizations can build pricing strategies that are both efficient and forward-looking. For businesses, pricing is a powerful lever that influences profitability, market position, and customer perception. In today’s competitive landscape, those that fail to leverage both AI and Design Thinking risk falling behind. AI offers the operational benefits of real-time optimization, driving immediate financial returns. Design Thinking provides the creative space to explore new value propositions and pricing structures that can secure long-term customer loyalty. 
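The "AI optimization" half of the pairing often reduces to fitting a demand model and searching for the revenue-maximizing price. A minimal sketch under invented assumptions: the linear demand coefficients below are hypothetical and would in practice be estimated from sales data, while the Design Thinking half would decide which pricing structures to search over in the first place.

```python
# Pick the revenue-maximizing price under a fitted linear demand model.
# Coefficients are hypothetical; real ones come from sales data.

def demand(price, a=1000.0, b=8.0):
    """Linear demand model: units sold at a given price."""
    return max(a - b * price, 0.0)

def best_price(candidates):
    """Grid-search the candidate prices for maximum revenue."""
    return max(candidates, key=lambda p: p * demand(p))

prices = [round(40 + i * 0.5, 1) for i in range(100)]  # 40.0 .. 89.5
print(best_price(prices))  # 62.5, matching the analytic optimum a/(2b)
```

The analytic optimum for linear demand is a/(2b) = 62.5 here, so the grid search recovers it; swapping in a demand model learned from data is the step that makes this "AI-driven."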



Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower