Daily Tech Digest - September 25, 2024

When technical debt strikes the security stack

“Security professionals are not immune from acquiring their own technical debt. It comes through a lack of attention to periodic reviews and maintenance of security controls,” says Howard Taylor, CISO of Radware. “The basic rule is that security rapidly decreases if it is not continuously improved. The time will come when a security incident or audit will require an emergency collection of the debt.” ... The paradox of security technical debt is that many departments concurrently suffer from both solution debt, which causes gaps in coverage or capabilities, and rampant tool sprawl, which eats up budget and makes it difficult to use tools effectively. ... “Detection engineering is often a large source of technical debt: over the years, a great detection engineering team can produce many great detections, but the reliability of those detections can start to fade as the rest of the infrastructure changes,” he says. “Great detections become less reliable over time, the authors leave the company, and the detection starts to be ignored. This leads to waste of energy and very often cost.” ... Role sprawl is another common scenario that contributes significantly to security debt, says Piyush Pandey, CEO at Pathlock.


Google Announces New Gmail Security Move For Millions

From the Gmail perspective, the security advisor will include a security sandbox where all email attachments will be scanned for malicious software, employing a virtual environment to safely analyze said files. Google said the tool can “delay message delivery, allow customization of scan rules, and automatically move suspicious messages to the spam folder.” Gmail also gets Enhanced Safe Browsing, which provides additional protection by scanning incoming messages for malicious content before they are actually delivered. ... A Google spokesperson told me that the Gemini AI app is to get enterprise-grade security protections in core services now. With availability from October 15 for customers running on a Workspace Business, Enterprise, or Frontline plan, Google said that “with all of the core Workspace security and privacy controls in place, companies have the tools to deploy AI securely, privately and responsibly in their organizations in the specific way that they want it.” The critical components of this security move include ensuring Gemini is subject to the same privacy, security, and compliance policies as the rest of the Workspace core services, such as Gmail and Docs.


The Next Big Interconnect Technology Could Be Plastic

e-Tube technology is a new, scalable interconnect platform that uses radio wave transmission over a dielectric waveguide made of – drumroll – common plastic material such as low-density polyethylene (LDPE). While waveguide theory has been studied for many years, only a few organizations have applied the technology to mainstream data interconnect applications. Because copper and optical interconnects are historically entrenched technologies, most research has focused on extending copper's life or improving the energy and cost efficiency of optical solutions. But now there is a shift toward exploring the e-Tube option, which delivers a combination of benefits that copper and optical cannot match, including energy efficiency, low latency, cost efficiency, and scalability to the multi-terabit network speeds required in next-gen data centers. The key metrics for data center cabling are peak throughput, energy efficiency, low latency, long cable reach, and a cost that enables mass deployment. Across these metrics, e-Tube technology provides advantages compared to copper and optical technologies. Traditionally, copper-based interconnects have been considered an inexpensive and reliable choice for short-reach data center applications, such as top-of-rack switch connections.


From Theory to Action: Building a Strong Cyber Risk Governance Framework

Setting your risk appetite is about more than just throwing a number out there. It’s about understanding the types of risks you face and translating them into specific, measurable risk tolerance statements. For example, “We’re willing to absorb up to $1 million in cyber losses annually but no more.” Once you have that in place, you’ll find decision-making becomes much more straightforward. ... If your current cybersecurity budget isn't sufficient to handle your stated risk appetite, you may need to adjust it. One of the best ways to determine if your budget aligns with your risk appetite is by using loss exceedance curves (LECs). These handy charts allow you to visualize the forecasted likelihood and impact of potential cyber events. They help you decide where to invest more in cybersecurity and perhaps even where to cut back.
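To make the loss exceedance idea concrete, here is a minimal Monte Carlo sketch in Python; the event frequency and severity parameters are illustrative assumptions, not industry benchmarks.

```python
import numpy as np

# Illustrative assumptions: ~3 cyber events per year (Poisson),
# per-event losses lognormal, centered in the low six figures.
rng = np.random.default_rng(42)
simulated_years = 100_000
annual_losses = np.array([
    rng.lognormal(mean=11.5, sigma=1.2, size=rng.poisson(3)).sum()
    for _ in range(simulated_years)
])

# Loss exceedance: P(annual loss >= x) at a few thresholds.
for threshold in (100_000, 500_000, 1_000_000, 5_000_000):
    prob = (annual_losses >= threshold).mean()
    print(f"P(annual loss >= ${threshold:,}) = {prob:.1%}")
```

Plotting these probabilities against the thresholds yields the LEC; reading it against a stated tolerance, such as the $1 million example above, shows at a glance whether current controls keep the exceedance probability acceptable.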


Is Prompt Engineering Dead? How To Scale Enterprise GenAI Adoption

If you pick a model that is a poor fit for your use case, it will not be good at determining the context of the question and will fail at retrieving a reference point for the response. In those situations, the lack of reference data needed to provide an accurate response contributes to a hallucination. While there are many situations where you would prefer the model to give no response at all rather than fabricate one, when no exact answer is available the model will instead take data points it thinks are contextually relevant to the query and return an inaccurate answer. ... To leverage LLMs effectively at an enterprise scale, businesses need to understand their limitations. Prompt engineering and RAG can improve accuracy, but LLMs must be tightly limited in domain knowledge and scope. Each LLM should be trained for a specific use case, using a specific dataset with data owners providing feedback. This leaves no chance of confusing the model with information from different domains. The training process for LLMs differs from traditional machine learning, requiring human oversight and quality assurance by data owners.
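As an illustration of preferring no answer over a fabricated one, here is a minimal retrieval sketch using scikit-learn's TF-IDF; the documents and the 0.2 similarity threshold are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Premium support is available 24/7 for enterprise customers.",
]

vectorizer = TfidfVectorizer(stop_words="english").fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, threshold: float = 0.2):
    """Return the best-matching document, or None when nothing is
    relevant enough, so the model can decline to answer instead of
    generating around weak context."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    best = scores.argmax()
    return documents[best] if scores[best] >= threshold else None

print(retrieve("What is the refund window?"))    # grounded: returns the policy doc
print(retrieve("What color is the CEO's car?"))  # None: no reference data
```

A production RAG system would swap TF-IDF for dense embeddings, but the guardrail is the same: if retrieval confidence is low, return nothing rather than feed the model weak context.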


AI disruption in Fintech: The dawn of smarter financial solutions

Financial institutions face diverse fraud challenges, from identity theft to fund transfer scams. Manual analysis of countless daily transactions is impractical. AI-based systems are empowering Fintechs to analyze data, detect anomalies, and flag suspicious activities. AI is monitoring transactions, filtering spam, and identifying malware. It can recognise social engineering patterns and alert users to potential threats. While fraudsters also use AI for sophisticated scams, financial institutions can leverage AI to identify synthetic content and distinguish between trustworthy and untrustworthy information. ... AI is transforming fintech customer service, enhancing retention and loyalty. It provides personalised, consistent experiences across channels, anticipating needs and offering value-driven recommendations. AI-powered chatbots handle common queries efficiently, allowing human agents to focus on complex issues. This technology enables 24/7 support across various platforms, meeting customer expectations for instant access. AI analytics predict customer needs based on financial history, transaction patterns, and life events, enabling targeted, timely offers. 
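As a sketch of the anomaly-flagging pattern described above, the following uses scikit-learn's IsolationForest on synthetic transaction features; the features and parameters are illustrative assumptions, not a production fraud model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Assumed features per transaction: [amount, hour_of_day, merchant_risk]
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),   # typical purchase amounts
    rng.integers(8, 22, 5000),       # daytime activity
    rng.uniform(0.0, 0.3, 5000),     # low-risk merchants
])
suspicious = np.array([[4500.0, 3, 0.9]])  # large 3 a.m. spend, risky merchant

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks the transaction as anomalous
print(model.predict(normal[:3]))  # inliers come back as +1
```

Flagged transactions would then feed a review queue or trigger step-up verification rather than an automatic block.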


CIOs Go Bold

In business, someone who is bold exudes confidence and assertiveness and is business savvy. However, there is a fine line between being assertive and confident in a way that is admired and being perceived as overbearing and hard to work with. ... If your personal CIO goals include being bolder, the first step is to self-assess. Then, look around. You probably already know individuals in the organization or colleagues in the C-suite who are perceived as bold movers and shakers. What did they do to acquire this reputation? ... To get results from the ideas you propose, the outcomes of your ideas must address strategic goals and/or pain points in the business. Consequently, the first rule of thumb for CIOs is to think beyond the IT box. Instead, ask questions such as how an IT solution can help solve a particular challenge for the business. Digitalization is a prime example. Early digitalization projects started out with missions such as eliminating paper by digitizing information and making it more searchable and accessible. Unfortunately, being able to search and access data was hard to quantify in terms of business results.


What does the Cyber Security and Resilience Bill mean for businesses?

The Bill aims to strengthen the UK’s cyber defences by ensuring that critical infrastructure and digital services are secure, protecting those services and their supply chains. It’s expected to share common ground with NIS2, but there are also some elements that are notably absent. These differences could mean the Bill is not quite as burdensome as its European counterpart, but equally it risks being not as effective. ... The problem now is that many businesses will be looking at both sets of regulations and scratching their heads in confusion. Should they assume that the Bill will follow the trajectory of NIS2 and make preparations accordingly, or should they assume it will continue to take a lighter touch, one that may not even apply to them? There’s no doubt that NIS2 will introduce a significant compliance burden, with one report suggesting it will cost upwards of 31.2bn euros per year. Then there’s the issue of those that will need to comply with both sets of regulations, i.e. those entities that either supply customers or have offices on the continent. They will be looking for the types of commonalities we’ve explored here in order to harmonise their compliance efforts and achieve economies of scale.


3 Key Practices for Perfecting Cloud Native Architecture

As microservices proliferate, managing their communication becomes increasingly complex. Service meshes like Istio or Linkerd offer a solution by handling service discovery, load balancing, and secure communication between services. This allows developers to focus on building features rather than getting bogged down by the intricacies of inter-service communication. ... Failures are inevitable in cloud native environments. Designing microservices with fault isolation in mind helps prevent a single service failure from cascading throughout the entire system. By implementing circuit breakers and retry mechanisms, organizations can enhance the resilience of their architecture, ensuring that their systems remain robust even in the face of unexpected challenges. ... Traditional CI/CD pipelines often become bottlenecks during the build and testing phases. To overcome this, modern CI/CD tools that support parallel execution should be leveraged. ... Not every code change necessitates a complete rebuild of the entire application. Organizations can significantly speed up the pipeline while conserving resources by implementing incremental builds and tests, which only recompile and retest the modified portions of the codebase.
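To ground the fault-isolation point, here is a minimal circuit breaker sketch in Python; the failure threshold and reset window are illustrative, and service meshes like Istio expose the same behavior as configuration (outlier detection) rather than application code.

```python
import time

class CircuitBreaker:
    """Fails fast after `max_failures` consecutive errors, then
    allows a trial call once `reset_after` seconds have passed."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Failing fast keeps threads and connections from piling up behind a dead dependency, which is how one slow service cascades into a systemwide outage.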


Copilots and low-code apps are creating a new 'vast attack surface' - 4 ways to fix that

"In traditional application development, apps are carefully built throughout the software development lifecycle, where each app is continuously planned, designed, implemented, measured, and analyzed," they explain. "In modern business application development, however, no such checks and balances exists and a new form of shadow IT emerges." Within the range of copilot solutions, "anyone can build and access powerful business apps and copilots that access, transfer, and store sensitive data and contribute to critical business operations with just a couple clicks of the mouse or use of natural language text prompts," the study cautions. "The velocity and magnitude of this new wave of application development creates a new and vast attack surface." Many enterprises encouraging copilot and low-code development are "not fully embracing that they need to contextualize and understand not only how many apps and copilots are being built, but also the business context such as what data the app interacts with, who it is intended for, and what business function it is meant to accomplish."



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - September 24, 2024

Effective Strategies for Talking About Security Risks with Business Leaders

As with every difficult conversation, CISOs must pick the right time, place and strategy to discuss cyber risks with the executive team and staff. Instead of waiting for the opportunity to arise, CISOs should proactively engage with individuals at all levels of the organization to influence them and ensure an understanding of security policies and incident response. These conversations could come in the form of monthly or quarterly meetings with senior stakeholders to maintain the cadence and consistency of the conversations, discuss how the threat landscape is evolving and review their part of the business through a cybersecurity lens. They could also be casual watercooler chats with staff members, which not only help to educate and inform employees but also build vital internal relationships that can affect online behaviors. In addition to talking, CISOs must also listen to and learn about key stakeholders to tailor conversations around their interests and concerns. ... If you're talking to the board, you'll need to know the people around that table. What are their interests, and how can you communicate in a way that resonates with them and gets their attention? Use visualization techniques and find a "cyber ally" on the board who will back you and help reinforce your ideas and the information you share.


Is Explainable AI Explainable Enough Yet?

“More often than not, the higher the accuracy provided by an AI model, the more complex and less explainable it becomes, which makes developing explainable AI models challenging,” says Godbole. “The premise of these AI systems is that they can work with high-dimensional data and build non-linear relationships that are beyond human capabilities. This allows them to identify patterns at a large scale and provide higher accuracy. However, it becomes difficult to explain this non-linearity and provide simple, intuitive explanations in understandable terms.” Other challenges include providing explanations that are both comprehensive and easily understandable, and the fact that businesses hesitate to explain their systems fully for fear of divulging intellectual property (IP) and losing their competitive advantage. “As we make progress towards more sophisticated AI systems, we may face greater challenges in explaining their decision-making processes. For autonomous systems, providing real-time explainability for critical decisions could be technically difficult, even though it will be highly necessary,” says Godbole. When AI is used in sensitive areas, it will become increasingly important to explain decisions that have significant ethical implications, but this will also be challenging.
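One way to see the accuracy-versus-explainability trade-off is to fit an interpretable surrogate to a flexible model; the synthetic data below is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (2000, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)  # non-linear truth

black_box = GradientBoostingRegressor().fit(X, y)
print("black-box fit R^2:", round(black_box.score(X, y), 3))

# Global surrogate: a linear model trained to mimic the black box.
# Its coefficients read as simple feature effects, but its fidelity
# drops wherever the black box relies on non-linear structure.
mimic_target = black_box.predict(X)
surrogate = LinearRegression().fit(X, mimic_target)
print("surrogate fidelity R^2:", round(surrogate.score(X, mimic_target), 3))
print("surrogate coefficients:", surrogate.coef_.round(2))
```

The gap between the two R^2 values is a rough measure of how much of the model's behavior resists a simple, intuitive explanation.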


The challenge of cloud computing forensics

Data replication across multiple locations complicates forensics processes that require the ability to pinpoint sources for analysis. Consider the challenge of retrieving deleted data from cloud systems—not just a technical obstacle, but a matter of accountability that is often not addressed by IT until it’s too late. Multitenancy involves shared resources among multiple users, making it difficult to distinguish and segregate data. This is a systemic problem for cloud security, and it is particularly problematic for cloud platform forensics. The NIST document acknowledges this challenge and recommends the implementation of access mechanisms and frameworks so companies can maintain data integrity and manage incident response. That way, the mechanisms to deal with issues are already in place when they occur, because accounting happens on an ongoing basis. The lack of location transparency is a nightmare. Data resides in various physical jurisdictions, all with different laws and cultural considerations. Crimes may occur on a public cloud point of presence in a country that disallows warrants to examine the physical systems, whereas other countries give law enforcement more options. Guess which countries the criminals choose to leverage.


Is the rise of genAI about to create an energy crisis?

Though data center power consumption is expected to double by 2028, according to IDC research director Sean Graham, AI still accounts for only a small share of data center energy consumption — just 18%. “So, it’s not fair to blame energy consumption on AI,” he said. “Now, I don’t mean to say AI isn’t using a lot of energy and data centers aren’t growing at a very fast rate. Data center energy consumption is growing at 20% per year. That’s significant, but it’s still only 2.5% of the global energy demand.” “It’s not like we can blame energy problems exclusively on AI,” Graham said. ... Beyond the pressure from genAI growth, electricity prices are rising due to supply and demand dynamics, environmental regulations, geopolitical events, and extreme weather events fueled in part by climate change, according to an IDC study published today. IDC believes the higher electricity prices of the last five years are likely to continue, making data centers considerably more expensive to operate. Amid that backdrop, electricity suppliers and other utilities have argued that AI creators and hosts should be required to pay higher prices for electricity — as cloud providers did before them — because they’re quickly consuming greater amounts of compute cycles and, therefore, energy compared to other users.


20 Years in Open Source: Resilience, Failure, Success

The rise of Big Tech has emphasized one of the most significant truths I’ve learned: the need for digital sovereignty. Over time, I’ve observed how centralized platforms can slowly erode consumers’ authority over their data and software. Today, more than ever, I believe that open source is a crucial path to regaining control — whether you’re an individual, a business, or a government. With open source software, you own your infrastructure, and you’re not subject to the whims of a vendor deciding to change prices, terms, or even direction. I’ve learned that part of being resilient in this industry means providing alternatives to centralized solutions. We built CryptPad to offer an encrypted, privacy-respecting alternative to tools like Google Docs. It hasn’t been easy, but it’s a project I believe in because it aligns with my core belief: people should control their data. I would improve the way the community communicates the benefits of open source. The conversation all too frequently concentrates on “free vs. paid” software. In reality, what matters is the distinction between dependence and freedom. I’ve concluded that we need to explain better how individuals may take charge of their data, privacy, and future by utilizing open source.


20 Tech Pros On Top Trends In Software Testing

The shift toward AI-driven testing will revolutionize software quality assurance. AI can intelligently predict potential failures, adapt to changes and optimize testing processes, ensuring that products are not only reliable, but also innovative. This approach allows us to focus on creating user experiences that are intuitive and delightful. ... AI-driven test automation has been the trend that almost every client of ours has been asking for in the past year. Combining advanced self-healing test scripts and visual testing methodologies has proven to improve software quality. This process also reduces the time to market by helping break down complex tasks. ... With many new applications relying heavily on third-party APIs or software libraries, rigorous security auditing and testing of these services is crucial to avoid supply chain attacks against critical services. ... One trend that will become increasingly important is shift-left security testing. As software development accelerates, security risks are growing. Integrating security testing into the early stages of development—shifting left—enables teams to identify vulnerabilities earlier, reduce remediation costs and ensure secure coding practices, ultimately leading to more secure software releases.


How to manage shadow IT and reduce your attack surface

To effectively mitigate the risks associated with shadow IT, your organization should adopt a comprehensive approach that encompasses the following strategies:
Understanding the root causes: Engage with different business units to identify the pain points that drive employees to seek unauthorized solutions. Streamline your IT processes to reduce friction and make it easier for employees to accomplish their tasks within approved channels, minimizing the temptation to bypass security measures.
Educating employees: Raise awareness across your organization about the risks associated with shadow IT and provide approved alternatives. Foster a culture of collaboration and open communication between IT and business teams, encouraging employees to seek guidance and support when selecting technology solutions.
Establishing clear policies: Define and communicate guidelines for the appropriate use of personal devices, software, and services. Enforce consequences for policy violations to ensure compliance and accountability.
Leveraging technology: Implement tools that enable your IT team to continuously discover and monitor all unknown and unmanaged IT assets, as sketched below.
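As a toy version of that discovery step, the following diffs domains seen in egress logs against an approved-services inventory; the log format and service names are hypothetical.

```python
# Hypothetical inputs: an approved-services inventory and domains
# extracted from DNS or proxy egress logs.
approved = {"salesforce.com", "slack.com", "github.com"}

log_lines = [
    "2024-09-20T10:01:02 host=fin-laptop-12 domain=salesforce.com",
    "2024-09-20T10:03:40 host=mkt-laptop-03 domain=randomfileshare.io",
    "2024-09-20T10:05:11 host=eng-laptop-44 domain=github.com",
]

observed = {line.rsplit("domain=", 1)[1] for line in log_lines}
shadow = observed - approved
print("unapproved services in use:", sorted(shadow))
# -> ['randomfileshare.io']: a lead to investigate and, if warranted,
#    replace with an approved alternative rather than silently block
```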


How software teams should prepare for the digital twin and AI revolution

By integrating AI to enhance real-time analytics, users can develop a more nuanced understanding of emerging issues, improving situational awareness and allowing them to make better decisions. Using in-memory computing technology, digital twins produce real-time analytics results that users aggregate and query to continuously visualize the dynamics of a complex system and look for emerging issues that need attention. In the near future, generative AI-driven tools will magnify these capabilities by automatically generating queries, detecting anomalies, and then alerting users as needed. AI will create sophisticated data visualizations on dashboards that point to emerging issues, giving managers even better situational awareness and responsiveness. ... Digital twins can use ML techniques to monitor thousands of entry points and internal servers to detect unusual logins, access attempts, and processes. However, detecting patterns that integrate this information and create an overall threat assessment may require data aggregation and query to tie together the elements of a kill chain. Generative AI can assist by using these tools to detect unusual behaviors and alert personnel who can carry the investigation forward.
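A minimal in-memory twin might look like the following sketch, which keeps a rolling telemetry window and flags sharp deviations; the window size and z-score threshold are illustrative assumptions.

```python
from collections import deque
import statistics

class DeviceTwin:
    """Holds recent telemetry in memory and flags readings that
    deviate sharply from the device's recent behavior."""

    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def ingest(self, value: float) -> bool:
        anomalous = False
        if len(self.readings) >= 10:  # need some history first
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        self.readings.append(value)
        return anomalous

twin = DeviceTwin()
for temp in [70, 71, 69, 70, 72, 71, 70, 69, 71, 70, 70, 95]:
    if twin.ingest(temp):
        print(f"emerging issue: reading {temp} deviates from recent behavior")
```

In the architecture described above, a generative AI layer would sit on top of thousands of such twins, composing queries across them and surfacing only the anomalies that matter.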


The Open Source Software Balancing Act: How to Maximize the Benefits And Minimize the Risks

OSS has democratized access to cutting-edge technologies, fostered a culture of collaboration and empowered businesses to prioritize innovation. By tapping into the vast pool of open source components available, software developers can accelerate product development, minimize time-to-market and drive innovation at scale. ... Paying down technical debt requires two things: consistency and prioritization. First, organizations should opt for fewer high-quality suppliers with well-maintained open source projects because they have greater reliability and stability, reducing the likelihood of introducing bugs or issues into their own codebase that rack up tech debt. In terms of transparency, organizations must have complete visibility into their software infrastructure. This is another area where SBOMs are key. With an SBOM, developers have full visibility into every element of their software, which reduces the risk of using outdated or vulnerable components that contribute to technical debt. There’s no question that open source software offers unparalleled opportunities for innovation, collaboration and growth within the software development ecosystem.


Is AI really going to burn the planet?

Trying to understand exactly how energy-intensive the training of large models is can be even more complex than understanding exactly how big data centers’ GHG sins are. A common “AI is environmentally bad” statistic is that training a large language model like GPT-3 is estimated to use just under 1,300 megawatt-hours (MWh) of electricity, about as much power as consumed annually by 130 US homes, or the equivalent of watching 1.63 million hours of Netflix. The source for this stat is AI company Hugging Face, which does seem to have used some real science to arrive at these numbers. It also, to quote a May Hugging Face probe into all this, seems to have proven that “multi-purpose, generative architectures are orders of magnitude more [energy] expensive than task-specific systems for a variety of tasks, even when controlling for the number of model parameters.” It’s important to note that what’s being compared here are task-specific AI runs (optimized, smaller models trained on specific generative AI tasks) and multi-purpose runs (a machine learning model that should be able to process information from different modalities, including images, videos, and text).



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls

Daily Tech Digest - September 23, 2024

Clear as mud: global rules around AI are starting to take shape but remain a little fuzzy

There is some subjectivity within the EU efforts, as “high risk” is defined as able to cause harm to society, which could receive wildly different interpretations. That said, the effort comes from the right place, which is to protect and ensure the “fundamental rights of EU citizens.” The EU Council views the act as designed to stimulate investment and innovation, while at the same time carving out exceptions for “military and defense as well as research purposes.” This perspective is not much different from the one the industry offered up in 2022 before the US Senate during discussions on the challenges of cybersecurity in the age of AI. At that hearing, two years ago, the Senate was urged not to stifle innovation, as adversaries and economic competitors in other nations were not going to be slowing down their own innovation. ... When I asked Price for his thoughts on the US position around global AI, that many nations should work together to ensure safety without hampering evolution, he agreed that “security considerations must remain at the forefront of these discussions to ensure that widespread AI adoption does not inadvertently amplify cybersecurity risks.”


Turning Compliance Into Strategy: 4 Tips For Navigating AI Regulation

For Chief Strategy Officers (CSOs), helping their organizations to understand and adapt to AI regulation is essential. CSOs can play a key role in guiding their organizations to turn compliance into strategy ... Establish effective governance frameworks that align with the AI Act’s requirements. This framework should include clear policies on data usage, transparency, accountability and ethical AI practices, as well as implementing AI-driven technologies, to help manage risks. Additionally, developing a governance structure that includes roles and responsibilities for AI oversight, and working with operational leaders to embed governance practices into day-to-day business operations can support the company’s long-term success and ethical reputation. ... Companies that form strategic partnerships are better positioned to stay competitive in the market, helping them navigate regulations like the AI Act. By combining the unique strengths of each partner, business leaders can develop more robust and scalable solutions that are better equipped to handle the nuances of regulations. ... The EU AI Act marks a significant shift in the regulatory landscape, challenging businesses to rethink how they develop and deploy AI technologies. 


‘Harvest now, decrypt later’: Why hackers are waiting for quantum computing

The “harvest now, decrypt later” phenomenon in cyberattacks — where attackers steal encrypted information in the hopes they will eventually be able to decrypt it — is becoming common. As quantum computing technology develops, it will only grow more prevalent. ... The average hacker will not be able to get a quantum computer for years — maybe even decades — because they are incredibly costly, resource-intensive, sensitive and prone to errors if they are not kept in ideal conditions. To clarify, these sensitive machines must stay just above absolute zero (-459.67 degrees Fahrenheit, to be exact) because thermal noise can interfere with their operations. However, quantum computing technology is advancing daily. Researchers are trying to make these computers smaller, easier to use and more reliable. Soon, they may become accessible enough that the average person can own one. ... The Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST) soon plan to release post-quantum cryptographic standards. The agencies are leveraging the latest techniques to make ciphers quantum computers cannot crack.


AI-driven demand forecasting ensures we’re ‘game-ready’ by predicting user behaviour and traffic

At Dream Sports, AI and machine learning are central to enhancing user experiences, optimising predictions, and securing our platform. AI-driven demand forecasting ensures we’re “game-ready” by predicting user behaviour and traffic for smooth gameplay during peak times. With over 250 million users, our ML systems safeguard platform integrity, detecting and preventing violations to ensure fair play. We also leverage ML to personalise user experiences, optimise rewards programs, and use causal inference for data-driven decisions across game recommendations and contest management. Generative AI initiatives include developing an AI Coach and enhancing user verification and customer success systems. Our collaboration with Columbia University’s Dream Sports AI Innovation Centre advances AI/ML applications in sports, focusing on predictive modelling, fan engagement, and sports tech optimisation. This partnership, alongside internal initiatives, helps us lead in reshaping sports technology with more immersive, personalised experiences through the rise of generative AI.


5 things your board needs to know about innovation and thought leadership

The most successful organizations have a programmatic approach to managing innovation and thought leadership, which helps them build organizational competency over time in both disciplines. How it’s structured is less important since it can be centralized, decentralized, or hybrid, but having a defined program with a mission, vision, strategy, and operating plan at a minimum is critical. As an example, the US Navy set a vision for 2030 related to the future of naval information warfare, creating a Hollywood-produced video, which became a north star for the organization, unlocking millions in funding for AI. The focus and types of innovation and thought leadership you pursue are important, too. In addition to an internal and client-facing focus, have a known set of innovation enablers you plan to pursue such as data and analytics, automation, adaptability, cloud, digital twins and AI, but be open to adding others as needed. The same is true for your editorial calendar for thought leadership and the topics you plan to address. And hear out new thought leadership topics that may come from left field, which could benefit customers. In addition, keep the board apprised of your multi-year innovation journey, goals and objectives.


Cloud Security Risk Prioritization is Broken. Here’s How to Fix It.

Business context is critical. It’s easy to understand, for example, that a CVE in a payment application is a high priority, whereas the same CVE in a search application is a low priority. Security programs must also take this into account. Effective security paradigms understand which detected vulnerabilities have the greatest business impact, so security teams aren’t spending time prioritizing lower-risk vulnerabilities. Traditional security applications run tests on code before it’s pushed. While this pre-production testing is still a best practice, it misses how code interacts with the environmental variables, configurations, and sensitive data it will coexist with once deployed. This insight is essential when you’re working to understand how a cloud-native application will function when live. Technologies such as application security posture management (ASPM) facilitate a more proactive approach by automating security review processes in production and creating a live view of an application, its vulnerabilities, and business risks. ASPM provides visibility into what’s happening in the cloud, giving security teams a better understanding of application behavior and attack surfaces so they can prioritize appropriately.
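A stripped-down version of that context-aware prioritization might look like this; the impact weights and exposure multipliers are hypothetical.

```python
# Hypothetical scoring: technical severity (CVSS) scaled by the
# affected application's business impact and exposure.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 8.1, "app": "payments", "internet_facing": True},
    {"cve": "CVE-2024-0001", "cvss": 8.1, "app": "internal-search", "internet_facing": False},
]
business_impact = {"payments": 1.0, "internal-search": 0.3}  # assumed weights

def risk_score(finding: dict) -> float:
    exposure = 1.0 if finding["internet_facing"] else 0.5
    return finding["cvss"] * business_impact.get(finding["app"], 0.5) * exposure

for f in sorted(findings, key=risk_score, reverse=True):
    print(f["app"], f["cve"], round(risk_score(f), 2))
# Same CVE, very different priorities once business context is applied.
```

An ASPM platform does the same thing at scale, sourcing the exposure and impact inputs from the live production environment instead of a hand-maintained table.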


A Look Inside the World of Ethical Hacking to Benefit Security

While there can be many different silos and areas of focus within the ethical hacking community, enterprises tend to interact with these experts in a few different ways. Penetration testing is a common connection between enterprises and ethical hackers, often one driven by compliance requirements. Larger, more mature organizations may employ penetration testers internally in addition to contracting with third parties, while many organizations rely solely on third parties. Enterprises may also engage ethical hackers to participate in red teaming exercises, simulations of real-world attacks. Typically, these exercises have a specific objective, and ethical hackers are free to use whatever means available to achieve that objective. Hannan offers a physical security assessment as an example of a red teaming exercise. “Walk into a building, find an unlocked computer, and plug a USB device into the computer,” he details. “That might be one of your objectives. How do you get into the building? Do you impersonate a delivery person? Do you impersonate an HVAC person? Do you just show up in a yellow vest and a hard hat and walk into the building? That's left up to you.”


Offensive cyber operations are more than just attacks

AI is already transforming offensive cyber operations by expanding data visibility and streamlining threat intelligence, which are critical for both defensive and offensive purposes. AI enables faster decision-making and the ability to predict and respond to threats more effectively. However, it also empowers adversaries, allowing for more sophisticated attacks which could include generating deepfakes, designing advanced malware, and spreading misinformation at an unprecedented scale on social media platforms. Quantum computing, while still in its early stages, poses a significant long-term challenge. Its potential to break current encryption methods could render many of today’s cybersecurity practices obsolete, creating new vulnerabilities for exploitation. ... A key limitation is time. Once a threat is identified, the race to harden systems and close vulnerabilities begins. The longer it takes to respond, the more risk organizations face. As threats become more sophisticated, defenders must continuously adapt and anticipate new methods of attack, making speed, agility, and proactive defense critical factors in minimizing exposure and mitigating risk.


Quantum Risks Pose New Threats for US Federal Cybersecurity

Adversaries including China are investing heavily in quantum computing in an apparent effort to outpace the United States, where bureaucratic red tape and unforeseen costs could significantly hinder federal efforts to keep up. "Upgrading this infrastructure isn’t going to be quick or cheap," said Georgianna Shea, chief technologist of the Foundation for Defense of Democracies' Center on Cyber and Technology Innovation. Testing for quantum-resistant encryption could reveal compatibility issues with legacy systems, such as increased power demands, reduced performance, larger key sizes and the need to adjust existing protocols and application stacks for keys and digital signatures, she told Information Security Media Group. The Foundation for Defense of Democracies is set to release new guidance for CIOs on Monday that will aim to lay out a road map for quantum readiness. The report is structured as a six-point plan that includes designating a leader, taking inventory of all encryption systems, prioritizing based on risk, understanding mitigation strategies, developing a transition plan and regularly monitoring and adjusting it as needed.


The Rise of Generative AI Fuels Focus on Data Quality

Traditionally, data quality initiatives have often been isolated efforts, disconnected from core business goals and strategic initiatives. Some data quality initiatives are compliance-focused, data-cleaning, or departmental efforts — all very important, but not directly tied to larger business goals. This makes it difficult to quantify the impact of data quality improvements and secure the necessary investment. As a result, data quality struggles to gain the crucial attention it deserves. However, the rise of GenAI is a game-changer for enterprises. GenAI apps rely heavily on high-quality data to generate accurate and reliable results. ... Organizations need a new way to organize the data and make it GenAI-ready, making sure it is continuously synced with the source systems, continuously cleansed according to a company's data quality policies, and continuously protected. But the solution extends beyond technology. Organizations must prioritize data quality by establishing key performance indicators (KPIs) directly linked to GenAI success, such as customer satisfaction, resolution rate, and response time.



Quote for the day:

“If you want to make a permanent change, stop focusing on the size of your problems and start focusing on the size of you!” -- T. Harv Eker

Daily Tech Digest - September 22, 2024

Cloud Exit: 42% of Companies Move Data Back On-Premises

Agarwal said: ‘Nobody is running a cloud business as a charity.’ When businesses reach a size where it is economically viable, constructing their own infrastructure can save significant costs while eliminating the ‘cloud middleman’ and associated expenses. That said, the cloud is certainly not “Just someone else’s computer,” as the joke goes. It has added immense value to those who adapted to it. But like artificial intelligence (AI), it has been mythologized and exaggerated as the ultimate tool for efficiency — romanticized to the point where pervasive myths about cost-effectiveness, reliability, and security are enough for businesses to dive headfirst into adoption. These myths are frequently discussed in high-profile forums, shaping perceptions that may not always align with reality, leading many to commit without fully considering potential drawbacks and real-world challenges. ... Avoidable charges and cloud waste were another noteworthy issue revealed in the 2023 State of Cloud Strategy Survey by Hashicorp. 94% of respondents in this survey reported incurring unnecessary expenses because of the underutilization of cloud resources. These costs often result from maintaining idle resources that do not cater to any of the company’s actual operational needs. 


Revitalize aging data centers

Before tackling the specifics of upgrading a data center, it is important to conduct a thorough assessment to identify the specific needs and areas for improvement. This assessment should examine the data center's existing infrastructure, including server capacity, storage solutions, and energy consumption. It is also important to evaluate how these elements stack up against current power standards, grid connection requirements, efficiency benchmarks, and environmental and permit regulations. By benchmarking against newer facilities, operators can identify key areas where technological and infrastructural enhancements are needed. ... While integrating the latest server technologies might seem obvious, these systems demand different support from existing infrastructure. The increased computational loads should not compromise system reliability. Therefore, transitioning to newer generations of processors can require updates to your data center's support infrastructure. This includes upgrading power distribution units (PDUs) to handle higher power densities, enhancing network infrastructure to support faster data transfer rates, and reinforcing structural components to accommodate the increased weight and space requirements of modern equipment.


Personhood: Cybersecurity’s next great authentication battle as AI improves

Although intriguing, the personhood plan has fundamental issues. First, credentials are very easily faked by gen AI systems. Second, customers may be hard-pressed to take the significant time and effort to gather documents and wait in line at a government office to prove that they are human simply to visit public websites or sales call centers. Some argue that the mass creation of humanity cookies would create another pivotal cybersecurity weak spot. “What if I get control of the devices that have the humanity cookie on it?” FaceTec’s Meier asks. “The Chinese might then have a billion humanity cookies at one person’s control.” Brian Levine, a managing director for cybersecurity at Ernst & Young, believes that, while such a system might be helpful in the short run, it likely won’t effectively protect enterprises for long. “It’s the same cat-and-mouse game” that cybersecurity vendors have always played with attackers, Levine says. ... Sandy Carielli, a Forrester principal analyst and lead author of the Forrester bot report, says a critical element of any bot defense program is to not delay good bots, such as legitimate search engine spiders, in the quest to block bad ones. “The crux of any bot management system has to be that it never introduces friction for good bots and certainly not for legitimate customers.”


What’s behind the return-to-office demands?

The effect is clear: an average employee wants to work three days a week in the office, while managers want them there four days. The managers win, of course: today half of all civil servants in Stockholm County work in the office four days a week, a clear increase. There are different conclusions one can draw. Mine are these: Physical workplaces and physical interaction are better than digital workspaces and meetings when it comes to creative tasks and social/cultural togetherness. I think, depending on what you work with, employees and managers are quite in agreement. Leadership in hybrid work models has not developed in the ways and at the pace required. Managers still have an excessive need for control, with no way to deal with this without trying to return to what was previously comfortable. Employees have probably not managed to convey to their bosses the positive aspects of working from home — for the employer. It’s great that your life puzzle is easier and you can take power walks and do laundry, but how does that help the company? It’s no wonder that whispering about sneaky vacations is taking off. And there’s an elephant in the room we should talk about — people really hate open office spaces and activity-based workplaces.


Passwordless AND Keyless: The Future of (Privileged) Access Management

Because SSH keys are functionally different from passwords, traditional PAMs don't manage them very well. Legacy PAMs were built to vault passwords, and they try to do the same with keys. Without going into too much detail about key functionality (like public and private keys), vaulting private keys and handing them out on request simply doesn't work. Keys must be secured on the server side; otherwise, keeping them under control is a futile effort. Furthermore, your solution needs to discover keys before it can manage them. Most PAMs can't. There are also key configuration files and other key(!) elements involved that traditional PAMs miss. ... Let's come back to the topic of passwords. Even if you have them vaulted, you aren't managing them in the best possible way. Modern, dynamic environments - using in-house or hosted cloud servers, containers, or Kubernetes orchestration - don't work well with vaults or with PAMs that were built 20 years ago. This is why we offer modern ephemeral access, where the secrets needed to access a target are granted just-in-time for the session and automatically expire once the authentication is done. This leaves no passwords or keys to manage - at all.
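A toy model of just-in-time, self-expiring access might look like the sketch below; it is a conceptual illustration, not the vendor's implementation, and the in-memory store stands in for a real policy engine.

```python
import secrets
import time

GRANTS = {}  # token -> (user, target, expiry); in-memory for illustration

def grant_access(user: str, target: str, ttl_seconds: int = 300) -> str:
    """Issue a single-session secret that expires on its own,
    leaving no long-lived password or key to vault or rotate."""
    token = secrets.token_urlsafe(32)
    GRANTS[token] = (user, target, time.monotonic() + ttl_seconds)
    return token

def authenticate(token: str, target: str) -> bool:
    entry = GRANTS.pop(token, None)  # single use: consumed on first check
    if entry is None:
        return False
    _, granted_target, expiry = entry
    return granted_target == target and time.monotonic() < expiry

t = grant_access("alice", "prod-db-01")
print(authenticate(t, "prod-db-01"))  # True, exactly once
print(authenticate(t, "prod-db-01"))  # False: already consumed
```

Because the credential never outlives the session, there is nothing left behind for an attacker to harvest, which is the core argument for ephemeral access over vaulting.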


Cybersecurity is Beyond Protecting Personal Data

Cyberattacks are not just about stealing personal data; they also involve stealing intellectual property and sensitive corporate information. In India, the number of data breaches has surged in recent years. The Indian Computer Emergency Response Team (CERT-IN) reported over 150,000 cyber incidents in 2023 alone, with significant breaches occurring in sectors such as finance, healthcare, and government. ... While there is a global scarcity of competent cybersecurity personnel, India is experiencing an exceptionally severe shortfall. A report conducted by (ISC)² indicates that there is a 3 million-person cybersecurity workforce shortage worldwide, with India contributing significantly to this shortfall. This deficiency hinders businesses' capacity to detect and address cyber threats. Meanwhile, team members' ignorance and lack of training might lead to human mistakes, which are a common way for cyberattacks to get started. ... Compliance with cybersecurity legislation and standards is critical for data protection and retaining confidence. India's legal landscape is changing, with initiatives like the Information Technology Act and the Personal Data Protection Bill aimed at improving cybersecurity.


Google calls for halting use of WHOIS for TLS domain verifications

TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communications verifying that a server belongs to a trusted entity and encrypts all traffic passing between it and an end user. ... The rules for how certificates are issued and the process for verifying the rightful owner of a domain are left to the CA/Browser Forum. One "base requirement rule" allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved. ... Specifically, watchTowr researchers were able to receive a verification link for any domain ending in .mobi, including ones they didn’t own. The researchers did this by deploying a fake WHOIS server and populating it with fake records. Creation of the fake server was possible because dotmobiregistry.net—the previous domain hosting the WHOIS server for .mobi domains—was allowed to expire after the server was relocated to a new domain. watchTowr researchers registered the domain, set up the imposter WHOIS server, and found that CAs continued to rely on it to verify ownership of .mobi domains.


How API Security Fits into DORA Compliance: Everything You Need to Know

Financial institutions rely heavily on third-party service providers, and APIs are the gateway through which many of these vendors access core banking systems. This introduces significant risk, as third-party APIs may become the weakest link in the supply chain. DORA places substantial emphasis on managing these risks, as outlined in Article 28, stating that financial entities must ensure that third-party providers “implement and maintain appropriate measures to manage ICT risks” and that institutions must “ensure the quality and integration of ICT services provided by third parties.” You need to start simple and be able to answer two questions: Who are your vendors? What third-party apps do you have connected? One of the biggest challenges here is the concept of shadow APIs—those untracked, unauthorized, or forgotten endpoints that can remain active long after their intended purpose. Shadow APIs expose financial institutions to vulnerabilities, making it difficult to track and control third-party access. DORA’s Article 28 further reinforces the need for financial institutions to “assess third-party ICT service providers’ ability to protect the integrity, security, and confidentiality of data, and to manage risks related to outsourcing.”
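Answering those two questions starts with comparing what is documented against what is actually serving traffic. The sketch below diffs gateway log entries against an OpenAPI path list; the endpoints and log format are hypothetical.

```python
import re
from collections import Counter

# Endpoints documented in the OpenAPI spec (hypothetical).
documented = {"/v1/accounts", "/v1/payments", "/v1/transfers"}

# Requests observed at the API gateway (hypothetical log lines).
gateway_log = [
    '10.0.0.5 "GET /v1/accounts" 200',
    '10.0.0.9 "POST /v1/legacy-export" 200',
    '10.0.0.9 "GET /internal/debug" 200',
]

observed = Counter()
for line in gateway_log:
    match = re.search(r'"(?:GET|POST|PUT|DELETE) (\S+)"', line)
    if match:
        observed[match.group(1)] += 1

shadow = {path: hits for path, hits in observed.items() if path not in documented}
print("undocumented endpoints in live traffic:", shadow)
# Candidates for review, decommissioning, or onboarding into
# the third-party risk controls DORA Article 28 calls for.
```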


Dirty code still runs, and that’s not a good thing

Quality code benefits developers by minimizing the time and effort spent on patching and refactoring later. Having confidence that code is clean also enhances collaboration, allowing developers to more easily reuse code from colleagues or AI tools. This not only simplifies their work but also reduces the need for retroactive fixes and helps prevent and lower technical debt. To deliver clean code, it's important that developers start with the right guardrails, tests, and analysis from the beginning, in the IDE. Pairing unit testing with static analysis can also help guarantee quality. The sooner these reviews happen in the development process, the better. ... Developers and businesses can't afford to perpetuate the cycle of bad code and, consequently, subpar software. Pushing poor-quality code through to development will only reintroduce software that breaks down later, even if it seems to run fine in the interim. To end the cycle, developers must deliver software built on clean code before deploying it. By implementing effective reviews and tests that gatekeep bad code before it becomes a major problem, developers can better equip themselves to deliver software with both functionality and longevity.


The Perfect Balance: Merging AI and Design Thinking for Innovative Pricing Strategies

This combination of AI’s optimization and Design Thinking’s creative transformation is exactly what modern businesses need to stay competitive. Relying solely on AI to adjust pricing may lead to efficiency gains, but without the innovation brought by Design Thinking, businesses risk missing out on new opportunities to reshape their pricing models and align them more closely with customer needs. Conversely, while Design Thinking can spark innovation, without AI’s precision, companies might struggle to implement their ideas in a way that maximizes profitability. It is by uniting these two approaches that organizations can build pricing strategies that are both efficient and forward-looking. For businesses, pricing is a powerful lever that influences profitability, market position, and customer perception. In today’s competitive landscape, those that fail to leverage both AI and Design Thinking risk falling behind. AI offers the operational benefits of real-time optimization, driving immediate financial returns. Design Thinking provides the creative space to explore new value propositions and pricing structures that can secure long-term customer loyalty. 



Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower

Daily Tech Digest - September 21, 2024

Quantinuum Scientists Successfully Teleport Logical Qubit With Fault Tolerance And Fidelity

This research advances quantum computing by making teleportation a reliable tool for quantum systems. Teleportation is essential in quantum algorithms and network designs, particularly in systems where moving qubits physically is difficult or impossible. By implementing teleportation in a fault-tolerant manner, Quantinuum’s research brings the field closer to practical, large-scale quantum computing systems. The fidelity of the teleportation also suggests that future quantum networks could reliably transmit quantum states over long distances, enabling new forms of secure communication and distributed quantum computing. The use of QEC in these experiments is especially promising, as error correction is one of the key challenges in making quantum computing scalable. Without fault tolerance, quantum states are prone to errors caused by environmental noise, making complex computations unreliable. The fact that Quantinuum achieved high fidelity using real-time QEC demonstrates the increasing maturity of its hardware and software systems.


Adversarial attacks on AI models are rising: what should you do now?

Adversarial attacks on ML models look to exploit gaps by intentionally attempting to redirect the model with inputs, corrupted data, jailbreak prompts and by hiding malicious commands in images loaded back into a model for analysis. Attackers fine-tune adversarial attacks to make models deliver false predictions and classifications, producing the wrong output. ... Disrupting entire networks with adversarial ML attacks is the stealth attack strategy nation-states are betting on to disrupt their adversaries’ infrastructure, which will have a cascading effect across supply chains. The 2024 Annual Threat Assessment of the U.S. Intelligence Community provides a sobering look at how important it is to protect networks from adversarial ML model attacks and why businesses need to consider better securing their private networks against them. ... Machine learning models can be manipulated without adversarial training. Adversarial training uses adversarial examples and significantly strengthens a model’s defenses. Researchers say adversarial training improves robustness but requires longer training times and may trade accuracy for resilience.
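For a concrete sense of how such inputs are crafted, here is a minimal Fast Gradient Sign Method (FGSM) sketch in PyTorch; the toy model and random input are illustrative stand-ins.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """FGSM: nudge the input along the sign of the loss gradient so
    the model's prediction degrades while the change stays small."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy classifier and input purely for illustration.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)  # stand-in for an image batch
y = torch.tensor([3])         # assumed true label
x_adv = fgsm_attack(model, x, y)
print("max pixel change:", (x_adv - x).abs().max().item())  # bounded by epsilon
```

Adversarial training folds examples like `x_adv` back into the training set, which is why it strengthens defenses at the cost of longer training runs.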


4 ways to become a more effective business leader

Delivering quantitative results isn't the only factor that defines effective leaders -- great managers also possess the right qualitative skills, including being able to communicate and collaborate with their peers. "Once you reach that higher level in the business, particularly if you are part of the executive committee, you need to know how to deal with corporate politics," said Vogel. Managers must recognize that underlying corporate politics can be driven by social motivations. Great leaders see the signs. "If you're unable to read the room and understand and navigate that context, it's going to be tough," said Vogel. ... The rapid pace of change in modern organizations represents a huge challenge for all business leaders. Vogel instructed would-be executives to keep learning. "Especially at the moment, and the world we work in, you need to upskill yourself," she said. "We have had so much change happening in the business." Vogel said technology is a key factor in the rapid pace of change. The past two years have seen huge demands for Gen AI and machine learning. In the future, technological innovations around blockchain, quantum computing, and robotics will lead to more pressure for digital transformation.


Cloud architects: Try thinking like a CFO

Cloud architects must cut through the hype and focus on real-world applications and benefits. More than mere technological enhancement is required; architects must make a clear financial case. This is particularly apt in environments where executive decision-makers demand justification for every technology dollar spent. Aligning cloud architecture strategies with business outcomes requires architects to step beyond traditional roles and strategically engage with critical financial metrics. For example, reducing operational expenses through efficient cloud resource management will directly impact a company’s bottom line. A successful cloud architect will provide CFOs with predictive analytics and cost-saving projections, demonstrating clear business value and market advantage. Moreover, the increasing pressure on businesses to operate sustainably allows architects to leverage the cloud’s potential for greener operations. These are often strategic wins that CFOs can directly appreciate in terms of corporate financial and social governance metrics. However, when I bring up the topic of sustainability, I receive a lot of nods, but few people seem to care. 


Wherever There's Ransomware, There's Service Account Compromise. Are You Protected?

Most service accounts are created to access other machines, which inevitably means they hold the access privileges required to log in and execute code on those machines. This is exactly what threat actors are after, since compromising these accounts gives them the ability to access systems and execute their malicious payloads. ... Some service accounts, especially those associated with installed on-prem software, are known to the IT and IAM staff. However, many are created ad hoc by IT and identity personnel with no documentation, which makes maintaining a monitored inventory of service accounts close to impossible. This plays into attackers' hands: compromising and abusing an unmonitored account has a far greater chance of going undetected by the victim. ... The common security measures used to prevent account compromise are MFA and PAM. MFA can't be applied to service accounts because they are not human and don't own a phone, hardware token, or any other additional factor that can verify their identity beyond a username and password. PAM solutions also struggle to protect service accounts.
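
One practical first step is simply building that inventory. Below is a minimal sketch, assuming an Active Directory environment and the ldap3 Python library; the domain, base DN, and credentials are placeholders, and filtering on servicePrincipalName is a common heuristic for spotting service accounts, not a complete one.

```python
# Enumerate likely service accounts (those with an SPN set) so they can be
# inventoried and monitored. dc.example.com, the base DN, and the credentials
# below are placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap://dc.example.com")
conn = Connection(server, user="EXAMPLE\\auditor", password="change-me", auto_bind=True)

conn.search(
    search_base="DC=example,DC=com",
    search_filter="(&(objectClass=user)(servicePrincipalName=*))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "servicePrincipalName", "lastLogonTimestamp", "pwdLastSet"],
)

for entry in conn.entries:
    # Stale logons or ancient passwords on a service account are red flags.
    print(entry.sAMAccountName, entry.lastLogonTimestamp, entry.pwdLastSet)
```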


Datacenters bleed watts and cash – all because they're afraid to flip a switch

The good news is that CPU vendors have developed all manner of techniques for managing power and performance over the years. Many of these are rooted in mobile applications, where energy consumption is a far more important metric than in the datacenter. According to Uptime, these controls can have a major impact on system power consumption and don't necessarily have to kneecap the chip by limiting its peak performance. The most power-efficient of these regimes, according to Uptime, are software-based controls, which have the potential to cut system power consumption by anywhere from 25 to 50 percent – depending on how sophisticated the operating system governor and power plan are. However, these software-level controls also have the potential to impose the biggest latency hit, which can make them impractical for bursty or latency-sensitive jobs. By comparison, Uptime found that hardware-only implementations designed to set performance targets tend to be far faster when switching between states – which means a lower latency hit. The trade-off is that the power savings aren't nearly as impressive, topping out around ten percent.
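
On Linux, the software-level controls Uptime describes are exposed through the cpufreq subsystem. Here is a minimal sketch of inspecting and switching the governor via sysfs; the paths are standard for cpufreq, but which governors are available depends on the platform driver, and writing requires root.

```python
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

# Read the current governor and the governors this platform supports.
available = (CPUFREQ / "scaling_available_governors").read_text().split()
current = (CPUFREQ / "scaling_governor").read_text().strip()
print(f"current governor: {current}, available: {available}")

# Switching from 'performance' to 'powersave' trades peak responsiveness
# for lower power draw -- the latency-versus-savings trade-off described above.
if "powersave" in available:
    (CPUFREQ / "scaling_governor").write_text("powersave")
```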


An AI-Driven Approach to Risk-Scoring Systems in Cybersecurity

The integration of AI into risk-scoring systems also enhances an organization's overall security strategy. These systems are not static; they learn and adapt over time, becoming increasingly effective as they encounter new threat patterns and scenarios. This adaptive capability is crucial in the face of rapidly evolving cyber threats, allowing organizations to stay one step ahead of potential attackers. An example of this in action is detecting anomalies during user sign-on by analyzing physical attributes, such as the user's location and device, and comparing them to typical behavior patterns. ... It's important, however, to realize that AI is not a cure-all for every cybersecurity challenge. The most impactful strategies combine the analytical power of AI with human expertise. While AI excels at processing vast amounts of data and identifying patterns, human analysts provide critical contextual understanding and decision-making capabilities. It's crucial for AI systems to continuously learn from the input of subject matter experts (SMEs) through a feedback loop to refine their accuracy and minimize alert fatigue; this collaboration between human and artificial intelligence creates a robust defense against a wide range of cyber threats.
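
As a minimal sketch of the sign-on example, here is an anomaly scorer built on scikit-learn's IsolationForest; the features (hour of sign-on, distance from the usual location, new-device flag) and the contamination threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated history: business-hours sign-ons, near the usual location, known devices.
history = np.column_stack([
    rng.normal(10, 2, 500),       # hour of day
    rng.exponential(5, 500),      # km from usual sign-on location
    rng.binomial(1, 0.05, 500),   # 1 = previously unseen device
])
model = IsolationForest(contamination=0.02, random_state=0).fit(history)

# A 3 a.m. sign-on from 4,000 km away on a new device should be flagged.
suspicious = np.array([[3.0, 4000.0, 1.0]])
print("anomaly score:", model.decision_function(suspicious))  # more negative = more anomalous
print("flagged:", model.predict(suspicious))                  # -1 means outlier
```

In production, these scores would feed the feedback loop described above, with SME review tuning the threshold to keep alert fatigue down.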


API Security in Financial Services: Navigating Regulatory and Operational Challenges

API breaches can have devastating consequences, including data loss, brand damage, financial losses, and customer attrition. For example, a breach that exposes customer account information can lead to financial theft and identity fraud. The reputational damage from such incidents can result in lost customer trust and increased scrutiny from regulators. Institutions must recognize the potential fallout from breaches and take proactive steps to mitigate these risks, understanding that the cost of a breach often far exceeds the investment in robust security measures. ... Common security controls such as encryption, data loss prevention, and web application firewalls are widely used, yet their effectiveness remains limited. The report indicates that 45% of financial institutions can prevent only half or fewer of the API attacks they face, underscoring the need for improved security strategies and tools. Encryption, while essential, only protects data at rest and in transit, leaving APIs vulnerable to other types of attacks such as injection and denial-of-service. Further, data loss prevention systems often struggle to keep pace with the volume and complexity of API traffic.
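
Layer-7 controls have to pick up where encryption stops. As one illustration, here is a minimal token-bucket rate limiter of the sort that blunts API denial-of-service attempts; the per-client key and the rate and burst thresholds are illustrative assumptions.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow `rate` requests per second per client, with bursts up to `burst`."""
    def __init__(self, rate: float = 5.0, burst: int = 10):
        self.rate, self.burst = rate, burst
        self.state = defaultdict(lambda: (float(burst), time.monotonic()))

    def allow(self, client_id: str) -> bool:
        tokens, last = self.state[client_id]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)  # refill
        if tokens >= 1.0:
            self.state[client_id] = (tokens - 1.0, now)
            return True
        self.state[client_id] = (tokens, now)
        return False

limiter = TokenBucket(rate=5, burst=10)
results = [limiter.allow("api-key-123") for _ in range(12)]
print(results)  # roughly the first 10 pass, then requests are throttled
```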


Guide To Navigating the Legal Perils After a Cyber Incident

Cyber incidents pose significant technical challenges, but the real storm often hits after the breach gets contained, Nall said. That’s when regulators step in to scrutinize every decision made in the heat of the crisis. While scrutiny has traditionally focused on corporate leadership or legal departments, today, infosec workers risk facing charges of fraud, negligence, or worse, simply for doing their jobs. ... Instead of clear, universal cybersecurity standards, regulatory bodies like the SEC only define acceptable practices after a breach occurs, Nall said. This reactive approach puts CISOs and other infosec workers at a distinct disadvantage. "Federal prosecutors and SEC attorneys read the paper like anyone else, and when they see bad things happening, like major breaches, especially where there is a delay in disclosure, they have to go after those companies," Nall explained during her presentation. ... Fortunately, CISOs and other infosec workers can take several concrete steps to protect their careers and reputations. By implementing airtight communication practices and negotiating solid legal protections, they can navigate the fallout of a disastrous cyber incident. 


As the AI Bubble Deflates, the Ethics of Hype Are in the Spotlight

One of the major problems we're seeing right now in the AI industry is the overpromising of what AI tools can actually do. There's a huge amount of excitement around AI's observational capacities -- the notion that AI can see things otherwise unobservable to the human eye thanks to these tools' ability to discern trends in huge amounts of data. However, these observational capacities are not only overstated but often completely misleading. They lead to AI being attributed almost magical powers, whereas in reality a large number of AI products grossly underperform relative to what they're promised to do. ... So, the true believers caught up in the promises and excitement are likely to be disappointed. But throughout the hype cycle, many notable figures, including practitioners and researchers, have challenged narratives about the unconstrained transformational potential of AI. Some have expressed alarm at the mechanisms, techniques, and behavior that allowed such unbridled fervour to override the healthy caution necessary ahead of deploying any emerging technology, especially one with the potential for such massive societal and environmental upheaval.



Quote for the day:

“Start each day with a positive thought and a grateful heart.” -- Roy T. Bennett