Daily Tech Digest - September 24, 2024

Effective Strategies for Talking About Security Risks with Business Leaders

As with any difficult conversation, CISOs must pick the right time, place and strategy to discuss cyber risks with the executive team and staff. Instead of waiting for the opportunity to arise, CISOs should proactively engage with individuals at all levels of the organization to influence them and ensure an understanding of security policies and incident response. These conversations could come in the form of monthly or quarterly meetings with senior stakeholders to maintain the cadence and consistency of the conversations, discuss how the threat landscape is evolving and review their part of the business through a cybersecurity lens. They could also be casual watercooler chats with staff members, which not only help to educate and inform employees but also build vital internal relationships that can affect online behaviors. In addition to talking, CISOs must also listen to and learn about key stakeholders to tailor conversations around their interests and concerns. ... If you're talking to the board, you'll need to know the people around that table. What are their interests, and how can you communicate in a way that resonates with them and gets their attention? Use visualization techniques and find a "cyber ally" on the board who will back you and help reinforce your ideas and the information you share.


Is Explainable AI Explainable Enough Yet?

“More often than not, the higher the accuracy provided by an AI model, the more complex and less explainable it becomes, which makes developing explainable AI models challenging,” says Godbole. “The premise of these AI systems is that they can work with high-dimensional data and build non-linear relationships that are beyond human capabilities. This allows them to identify patterns at a large scale and provide higher accuracy. However, it becomes difficult to explain this non-linearity and provide simple, intuitive explanations in understandable terms.” Other challenges include providing explanations that are both comprehensive and easily understandable, and the fact that businesses hesitate to explain their systems fully for fear of divulging intellectual property (IP) and losing their competitive advantage. “As we make progress towards more sophisticated AI systems, we may face greater challenges in explaining their decision-making processes. For autonomous systems, providing real-time explainability for critical decisions could be technically difficult, even though it will be highly necessary,” says Godbole. When AI is used in sensitive areas, it will become increasingly important to explain decisions that have significant ethical implications, but this will also be challenging.


The challenge of cloud computing forensics

Data replication across multiple locations complicates forensics processes that require the ability to pinpoint sources for analysis. Consider the challenge of retrieving deleted data from cloud systems—not just a technical obstacle, but a matter of accountability that is often not addressed by IT until it’s too late. Multitenancy involves shared resources among multiple users, making it difficult to distinguish and segregate data. This is a systemic problem for cloud security, and it is particularly problematic for cloud platform forensics. The NIST document acknowledges this challenge and recommends the implementation of access mechanisms and frameworks so companies can maintain data integrity and manage incident response. That way, mechanisms are in place to deal with issues once they occur, because accounting happens on an ongoing basis. The lack of location transparency is a nightmare. Data resides in various physical jurisdictions, all with different laws and cultural considerations. Crimes may occur on a public cloud point of presence in a country that disallows warrants to examine the physical systems, whereas other countries have more options for law enforcement. Guess which countries the criminals choose to leverage.


Is the rise of genAI about to create an energy crisis?

Though data center power consumption is expected to double by 2028, according to IDC research director Sean Graham, it’s still a small percentage of overall energy consumption — just 18%. “So, it’s not fair to blame energy consumption on AI,” he said. “Now, I don’t mean to say AI isn’t using a lot of energy and data centers aren’t growing at a very fast rate. Data center energy consumption is growing at 20% per year. That’s significant, but it’s still only 2.5% of the global energy demand. It’s not like we can blame energy problems exclusively on AI,” Graham said. ... Beyond the pressure from genAI growth, electricity prices are rising due to supply and demand dynamics, environmental regulations, geopolitical events, and extreme weather events fueled in part by climate change, according to an IDC study published today. IDC believes the higher electricity prices of the last five years are likely to continue, making data centers considerably more expensive to operate. Amid that backdrop, electricity suppliers and other utilities have argued that AI creators and hosts should be required to pay higher prices for electricity — as cloud providers did before them — because they’re quickly consuming greater amounts of compute cycles and, therefore, energy compared to other users.


20 Years in Open Source: Resilience, Failure, Success

The rise of Big Tech has emphasized one of the most significant truths I’ve learned: the need for digital sovereignty. Over time, I’ve observed how centralized platforms can slowly erode consumers’ authority over their data and software. Today, more than ever, I believe that open source is a crucial path to regaining control — whether you’re an individual, a business, or a government. With open source software, you own your infrastructure, and you’re not subject to the whims of a vendor deciding to change prices, terms, or even direction. I’ve learned that part of being resilient in this industry means providing alternatives to centralized solutions. We built CryptPad to offer an encrypted, privacy-respecting alternative to tools like Google Docs. It hasn’t been easy, but it’s a project I believe in because it aligns with my core belief: people should control their data. I would improve the way the community communicates the benefits of open source. The conversation all too frequently concentrates on “free vs. paid” software. In reality, what matters is the distinction between dependence and freedom. I’ve concluded that we need to explain better how individuals may take charge of their data, privacy, and future by utilizing open source.


20 Tech Pros On Top Trends In Software Testing

The shift toward AI-driven testing will revolutionize software quality assurance. AI can intelligently predict potential failures, adapt to changes and optimize testing processes, ensuring that products are not only reliable, but also innovative. This approach allows us to focus on creating user experiences that are intuitive and delightful. ... AI-driven test automation has been the trend that almost every client of ours has been asking for in the past year. Combining advanced self-healing test scripts and visual testing methodologies has proven to improve software quality. This process also reduces the time to market by helping break down complex tasks. ... With many new applications relying heavily on third-party APIs or software libraries, rigorous security auditing and testing of these services is crucial to avoid supply chain attacks against critical services. ... One trend that will become increasingly important is shift-left security testing. As software development accelerates, security risks are growing. Integrating security testing into the early stages of development—shifting left—enables teams to identify vulnerabilities earlier, reduce remediation costs and ensure secure coding practices, ultimately leading to more secure software releases.
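The "self-healing test scripts" idea mentioned above can be sketched in a few lines: when a test's primary locator stops matching after a UI change, the runner falls back to previously recorded alternates instead of failing. This is an illustrative toy, not any vendor's framework; the selectors and the dictionary "DOM" are made-up assumptions.

```python
# Toy sketch of a self-healing locator strategy. The "DOM" is just a dict
# mapping selector strings to elements; real tools do fuzzier matching.

def find_element(dom: dict, selector: str):
    """Toy DOM lookup: return the element for a selector, or None."""
    return dom.get(selector)

def self_healing_find(dom: dict, selectors: list):
    """Try each known selector in order; return (element, selector_used)."""
    for sel in selectors:
        el = find_element(dom, sel)
        if el is not None:
            return el, sel
    raise LookupError(f"no selector matched: {selectors}")

# The login button's id changed from '#login' to '#signin'; the healed
# locator list still finds it without a test-script change.
page = {"#signin": {"tag": "button", "text": "Log in"}}
element, used = self_healing_find(page, ["#login", "#signin", "button[type=submit]"])
```

In practice the fallback list is recorded automatically from earlier passing runs, which is what lets such suites survive routine UI refactors.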


How to manage shadow IT and reduce your attack surface

To effectively mitigate the risks associated with shadow IT, your organization should adopt a comprehensive approach that encompasses the following strategies:

Understanding the root causes: Engage with different business units to identify the pain points that drive employees to seek unauthorized solutions. Streamline your IT processes to reduce friction and make it easier for employees to accomplish their tasks within approved channels, minimizing the temptation to bypass security measures.

Educating employees: Raise awareness across your organization about the risks associated with shadow IT and provide approved alternatives. Foster a culture of collaboration and open communication between IT and business teams, encouraging employees to seek guidance and support when selecting technology solutions.

Establishing clear policies: Define and communicate guidelines for the appropriate use of personal devices, software, and services. Enforce consequences for policy violations to ensure compliance and accountability.

Leveraging technology: Implement tools that enable your IT team to continuously discover and monitor all unknown and unmanaged IT assets.
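For the "leveraging technology" point, a toy discovery pass over proxy logs illustrates the idea: any destination not on the approved list is a shadow-IT candidate worth investigating. The log format, domains, and approved list below are made-up assumptions, not a real product's schema.

```python
# Illustrative sketch: flag unapproved SaaS domains seen in proxy logs.

APPROVED = {"office365.com", "salesforce.com", "github.com"}  # assumed allowlist

def shadow_it_candidates(log_lines):
    """Return the set of destination domains not on the approved list."""
    seen = set()
    for line in log_lines:
        # Assume a simple log format: "<user> <destination-domain>"
        _, domain = line.split()
        if domain not in APPROVED:
            seen.add(domain)
    return seen

logs = [
    "alice office365.com",
    "bob randomfileshare.io",
    "carol github.com",
    "dave randomfileshare.io",
]
unknown = shadow_it_candidates(logs)
```

Real asset-discovery tools correlate many more signals (DNS, SSO, expense data), but the core loop is the same: continuously diff observed usage against the sanctioned inventory.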


How software teams should prepare for the digital twin and AI revolution

By integrating AI to enhance real-time analytics, users can develop a more nuanced understanding of emerging issues, improving situational awareness and allowing them to make better decisions. Using in-memory computing technology, digital twins produce real-time analytics results that users aggregate and query to continuously visualize the dynamics of a complex system and look for emerging issues that need attention. In the near future, generative AI-driven tools will magnify these capabilities by automatically generating queries, detecting anomalies, and then alerting users as needed. AI will create sophisticated data visualizations on dashboards that point to emerging issues, giving managers even better situational awareness and responsiveness. ... Digital twins can use ML techniques to monitor thousands of entry points and internal servers to detect unusual logins, access attempts, and processes. However, detecting patterns that integrate this information and create an overall threat assessment may require data aggregation and query to tie together the elements of a kill chain. Generative AI can assist personnel by using these tools to detect unusual behaviors and alert personnel who can carry the investigation forward.
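The monitoring pattern described here, aggregating a metric and flagging readings that deviate sharply from recent history, can be sketched with a simple z-score check. The threshold and login counts below are illustrative assumptions, not a digital-twin product API.

```python
# Minimal sketch of the kind of streaming check a digital twin might run:
# flag a metric reading as anomalous when it sits far from the recent mean.
from statistics import mean, stdev

def is_anomalous(history, reading, threshold=3.0):
    """Flag reading if it is more than `threshold` std-devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > threshold

logins_per_minute = [12, 14, 11, 13, 12, 15, 13, 12]
normal = is_anomalous(logins_per_minute, 14)   # within the usual range
spike = is_anomalous(logins_per_minute, 90)    # sudden burst of logins
```

The generative-AI layer the article anticipates would sit on top of checks like this one, composing the queries and explaining flagged readings rather than replacing the underlying statistics.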


The Open Source Software Balancing Act: How to Maximize the Benefits And Minimize the Risks

OSS has democratized access to cutting-edge technologies, fostered a culture of collaboration and empowered businesses to prioritize innovation. By tapping into the vast pool of open source components available, software developers can accelerate product development, minimize time-to-market and drive innovation at scale. ... Paying down technical debt requires two things: consistency and transparency. First, organizations should opt for fewer high-quality suppliers with well-maintained open source projects because they have greater reliability and stability, reducing the likelihood of introducing bugs or issues into their own codebase that rack up tech debt. In terms of transparency, organizations must have complete visibility into their software infrastructure. This is another area where SBOMs are key. With an SBOM, developers have full visibility into every element of their software, which reduces the risk of using outdated or vulnerable components that contribute to technical debt. There’s no question that open source software offers unparalleled opportunities for innovation, collaboration and growth within the software development ecosystem.


Is AI really going to burn the planet?

Trying to understand exactly how energy-intensive the training of AI models is can be even more complex than understanding exactly how big data center GHG sins are. A common “AI is environmentally bad” statistic is that training a large language model like GPT-3 is estimated to use just under 1,300 megawatt-hours (MWh) of electricity, about as much electricity as is consumed annually by 130 US homes, or the equivalent of watching 1.63 million hours of Netflix. The source for this stat is AI company Hugging Face, which does seem to have used some real science to arrive at these numbers. It also, to quote a May Hugging Face probe into all this, seems to have proven that "multi-purpose, generative architectures are orders of magnitude more [energy] expensive than task-specific systems for a variety of tasks, even when controlling for the number of model parameters.” It’s important to note that what’s being compared here are task-specific AI runs (optimized, smaller models trained in specific generative AI tasks) and multi-purpose (a machine learning model that should be able to process information from different modalities, including images, videos, and text).
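As a sanity check on that comparison, the arithmetic is straightforward. The per-home figure used below (about 10,600 kWh per year, a commonly cited US household average) is an assumption of this sketch, not a number from the article.

```python
# Rough check of "1,300 MWh is about 130 US homes' annual electricity use".
TRAINING_MWH = 1300          # estimated energy for one GPT-3-scale training run
HOME_KWH_PER_YEAR = 10_600   # assumed average US household consumption

training_kwh = TRAINING_MWH * 1_000            # 1 MWh = 1,000 kWh
homes_equivalent = training_kwh / HOME_KWH_PER_YEAR
```

The result lands in the low 120s, so the article's "about 130 homes" framing is in the right ballpark given the uncertainty in both inputs.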



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls

Daily Tech Digest - September 23, 2024

Clear as mud: global rules around AI are starting to take shape but remain a little fuzzy

There is some subjectivity within the EU efforts, as “high risk” is defined as able to cause harm to society, which could receive wildly different interpretations. That said, the effort comes from the right place, which is to protect and ensure the “fundamental rights of EU citizens.” The EU Council views the act as designed to stimulate investment and innovation, while at the same time, carving out exceptions for “military and defense as well as research purposes.” This perspective is not much different from the one the industry offered up in 2022 before the US Senate during discussions on the challenges of cybersecurity in the age of AI. At that hearing, two years ago, the Senate was urged not to stifle innovation as adversaries and economic competitors in other nations were not going to be slowing down their innovation. ... When I asked Price for his thoughts on the US position around global AI that many nations should work together to ensure safety without hampering evolution, he agreed that “security considerations must remain at the forefront of these discussions to ensure that widespread AI adoption does not inadvertently amplify cybersecurity risks.”


Turning Compliance Into Strategy: 4 Tips For Navigating AI Regulation

For Chief Strategy Officers (CSOs), helping their organizations to understand and adapt to AI regulation is essential. CSOs can play a key role in guiding their organizations to turn compliance into strategy ... Establish effective governance frameworks that align with the AI Act’s requirements. This framework should include clear policies on data usage, transparency, accountability and ethical AI practices, as well as implementing AI-driven technologies, to help manage risks. Additionally, developing a governance structure that includes roles and responsibilities for AI oversight, and working with operational leaders to embed governance practices into day-to-day business operations can support the company’s long-term success and ethical reputation. ... Companies that form strategic partnerships are better positioned to stay competitive in the market, helping them navigate regulations like the AI Act. By combining the unique strengths of each partner, business leaders can develop more robust and scalable solutions that are better equipped to handle the nuances of regulations. ... The EU AI Act marks a significant shift in the regulatory landscape, challenging businesses to rethink how they develop and deploy AI technologies. 


‘Harvest now, decrypt later’: Why hackers are waiting for quantum computing

The “harvest now, decrypt later” phenomenon in cyberattacks — where attackers steal encrypted information in the hopes they will eventually be able to decrypt it — is becoming common. As quantum computing technology develops, it will only grow more prevalent. ... The average hacker will not be able to get a quantum computer for years — maybe even decades — because they are incredibly costly, resource-intensive, sensitive and prone to errors if they are not kept in ideal conditions. To clarify, these sensitive machines must stay just above absolute zero (−459.67 degrees Fahrenheit) because thermal noise can interfere with their operations. However, quantum computing technology is advancing daily. Researchers are trying to make these computers smaller, easier to use and more reliable. Soon, they may become accessible enough that the average person can own one. ... The Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST) soon plan to release post-quantum cryptographic standards. The agencies are leveraging the latest techniques to make ciphers quantum computers cannot crack.


AI-driven demand forecasting ensures we’re ‘game-ready’ by predicting user behaviour and traffic

At Dream Sports, AI and machine learning are central to enhancing user experiences, optimising predictions, and securing our platform. AI-driven demand forecasting ensures we’re “game-ready” by predicting user behaviour and traffic for smooth gameplay during peak times. With over 250 million users, our ML systems safeguard platform integrity, detecting and preventing violations to ensure fair play. We also leverage ML to personalise user experiences, optimise rewards programs, and use causal inference for data-driven decisions across game recommendations and contest management. Generative AI initiatives include developing an AI Coach and enhancing user verification and customer success systems. Our collaboration with Columbia University’s Dream Sports AI Innovation Centre advances AI/ML applications in sports, focusing on predictive modelling, fan engagement, and sports tech optimisation. This partnership, alongside internal initiatives, helps us lead in reshaping sports technology with more immersive, personalised experiences through the rise of generative AI.


5 things your board needs to know about innovation and thought leadership

The most successful organizations have a programmatic approach to managing innovation and thought leadership, which helps them build organizational competency over time in both disciplines. How it’s structured is less important since it can be centralized, decentralized, or hybrid, but having a defined program with a mission, vision, strategy, and operating plan at a minimum is critical. As an example, the US Navy set a vision for 2030 related to the future of naval information warfare, creating a Hollywood-produced video, which became a north star for the organization, unlocking millions in funding for AI. The focus and types of innovation and thought leadership you pursue are important, too. In addition to an internal and client-facing focus, have a known set of innovation enablers you plan to pursue such as data and analytics, automation, adaptability, cloud, digital twins and AI, but be open to adding others as needed. The same is true for your editorial calendar for thought leadership and the topics you plan to address. And hear out new thought leadership topics that may come from left field, which could benefit customers. In addition, keep the board apprised of your multi-year innovation journey, goals and objectives.


Cloud Security Risk Prioritization is Broken. Here’s How to Fix It.

Business context is critical. It’s easy to understand, for example, that a CVE in a payment application is a high priority, whereas the same CVE in a search application is low priority. Security programs must also take this into account. Effective security paradigms understand which detected vulnerabilities have the greatest business impact, so security teams aren’t spending time prioritizing lower-risk vulnerabilities. Traditional security applications run tests on code before it’s pushed. While this pre-production testing is still a best practice, it misses how code interacts with the environmental variables, configurations, and sensitive data it will coexist with once deployed. This insight is essential when you’re working to understand how a cloud-native application will function when live. Technologies such as application security posture management (ASPM) facilitate a more proactive approach by automating security review processes in production and creating a live view of an application, its vulnerabilities, and business risks. ASPM provides visibility into what’s happening in the cloud, giving security teams a better understanding of application behavior and attack surfaces so they can prioritize appropriately.
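The idea that the same CVE carries different priority in different applications can be sketched as a simple weighting. The criticality weights and application names below are illustrative assumptions, not a standard scoring formula.

```python
# Hedged sketch of context-aware prioritization: a raw CVSS score is
# weighted by the business criticality of the application it lives in.

CRITICALITY = {"payments": 1.0, "search": 0.3}   # assumed business weights

def contextual_risk(cvss_score: float, app: str) -> float:
    """Scale a raw CVSS score (0-10) by the app's business criticality."""
    return round(cvss_score * CRITICALITY.get(app, 0.5), 1)

payments_risk = contextual_risk(8.0, "payments")  # same CVE, high priority
search_risk = contextual_risk(8.0, "search")      # same CVE, lower priority
```

Real ASPM tools derive the weights from observed data flows and exposure rather than a hand-written table, but the prioritization logic follows this shape.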


A Look Inside the World of Ethical Hacking to Benefit Security

While there can be many different siloes and areas of focus within the ethical hacking community, enterprises tend to interact with these experts in a few different ways. Penetration testing is a common connection between enterprises and ethical hackers, often one driven by compliance requirements. Larger, more mature organizations may employ penetration testers internally in addition to contracting with third parties, while many organizations rely solely on third parties. Enterprises may also engage ethical hackers to participate in red teaming exercises, simulations of real-world attacks. Typically, these exercises have a specific objective, and ethical hackers are free to use whatever means available to achieve that objective. Hannan offers a physical security assessment as an example of a red teaming exercise. “Walk into a building, find an unlocked computer, and plug a USB device into the computer,” he details. “That might be one of your objectives. How do you get into the building? Do you impersonate a delivery person? Do you impersonate an HVAC person? Do you just show up in a yellow vest and a hard hat and walk into the building? That's left up to you.”


Offensive cyber operations are more than just attacks

AI is already transforming offensive cyber operations by expanding data visibility and streamlining threat intelligence, which are critical for both defensive and offensive purposes. AI enables faster decision-making and the ability to predict and respond to threats more effectively. However, it also empowers adversaries, allowing for more sophisticated attacks which could include generating deepfakes, designing advanced malware, and spreading misinformation at an unprecedented scale on social media platforms. Quantum computing, while still in its early stages, poses a significant long-term challenge. Its potential to break current encryption methods could render many of today’s cybersecurity practices obsolete, creating new vulnerabilities for exploitation. ... A key limitation is time. Once a threat is identified, the race to harden systems and close vulnerabilities begins. The longer it takes to respond, the more risk organizations face. As threats become more sophisticated, defenders must continuously adapt and anticipate new methods of attack, making speed, agility, and proactive defense critical factors in minimizing exposure and mitigating risk.


Quantum Risks Pose New Threats for US Federal Cybersecurity

Adversaries including China are investing heavily in quantum computing in an apparent effort to outpace the United States, where bureaucratic red tape and unforeseen costs could significantly hinder federal efforts to keep up. "Upgrading this infrastructure isn’t going to be quick or cheap," said Georgianna Shea, chief technologist of the Foundation for Defense of Democracies' Center on Cyber and Technology Innovation. Testing for quantum-resistant encryption could reveal compatibility issues with legacy systems, such as increased power demands, reduced performance, larger key sizes and the need to adjust existing protocols and application stacks for keys and digital signatures, she told Information Security Media Group. The Foundation for Defense of Democracies is set to release new guidance for CIOs on Monday that will aim to lay out a road map for quantum readiness. The report is structured as a six-point plan that includes designating a leader, taking inventory of all encryption systems, prioritizing based on risk, understanding mitigation strategies, developing a transition plan and regularly monitoring and adjusting it as needed.


The Rise of Generative AI Fuels Focus on Data Quality

Traditionally, data quality initiatives have often been isolated efforts, disconnected from core business goals and strategic initiatives. Some data quality initiatives are compliance-focused; others are data-cleaning or departmental efforts. All are very important, but not directly tied to larger business goals. This makes it difficult to quantify the impact of data quality improvements and secure the necessary investment. As a result, data quality struggles to gain the crucial attention it deserves. However, the rise of GenAI presents a game-changer for enterprises. GenAI apps rely heavily on high-quality data to generate accurate and reliable results. ... Organizations need a new way to organize the data and make it GenAI-ready, making sure it is continuously synced with the source systems, continuously cleansed according to a company's data quality policies, and continuously protected. But the solution extends beyond technology. Organizations must prioritize data quality by establishing key performance indicators (KPIs) directly linked to GenAI success, such as customer satisfaction, resolution rate, and response time.
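A minimal sketch of the "continuously cleansed" gate described above: before records feed a GenAI pipeline, measure null rate and staleness against policy thresholds. The field names and thresholds here are assumptions made for illustration.

```python
# Toy data-quality gate: check null rate and record freshness before
# records are handed to a downstream GenAI pipeline.
from datetime import datetime, timedelta, timezone

def quality_report(records, max_null_rate=0.1, max_age=timedelta(days=7)):
    """Summarize null rate and staleness; 'passes' reflects both policies."""
    now = datetime.now(timezone.utc)
    nulls = sum(1 for r in records if r["value"] is None)
    stale = sum(1 for r in records if now - r["updated"] > max_age)
    return {
        "null_rate": nulls / len(records),
        "stale_count": stale,
        "passes": nulls / len(records) <= max_null_rate and stale == 0,
    }

now = datetime.now(timezone.utc)
records = [
    {"value": 42, "updated": now - timedelta(days=1)},
    {"value": None, "updated": now - timedelta(days=2)},   # missing value
    {"value": 7, "updated": now - timedelta(days=30)},     # stale record
]
report = quality_report(records)
```

Tying a gate like this to GenAI-facing KPIs is what moves data quality from an isolated cleanup task to a measurable contributor to business outcomes.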



Quote for the day:

“If you want to make a permanent change, stop focusing on the size of your problems and start focusing on the size of you!” -- T. Harv Eker

Daily Tech Digest - September 22, 2024

Cloud Exit: 42% of Companies Move Data Back On-Premises

Agarwal said: ‘Nobody is running a cloud business as a charity.’ When businesses reach a size where it is economically viable, constructing their own infrastructure can save significant costs while eliminating the ‘cloud middleman’ and associated expenses. That said, the cloud is certainly not “Just someone else’s computer,” as the joke goes. It has added immense value to those who adapted to it. But like artificial intelligence (AI), it has been mythologized and exaggerated as the ultimate tool for efficiency — romanticized to the point where pervasive myths about cost-effectiveness, reliability, and security are enough for businesses to dive headfirst into adoption. These myths are frequently discussed in high-profile forums, shaping perceptions that may not always align with reality, leading many to commit without fully considering potential drawbacks and real-world challenges. ... Avoidable charges and cloud waste were another noteworthy issue revealed in the 2023 State of Cloud Strategy Survey by Hashicorp. 94% of respondents in this survey reported incurring unnecessary expenses because of the underutilization of cloud resources. These costs often result from maintaining idle resources that do not cater to any of the company’s actual operational needs. 


Revitalize aging data centers

Before tackling the specifics of upgrading a data center, it is important to conduct a thorough assessment to identify the specific needs and areas for improvement. This assessment should examine the data center's existing infrastructure, including server capacity, storage solutions, and energy consumption. It is also important to evaluate how these elements stack up against current power standards, grid connection requirements, efficiency benchmarks, and environmental and permit regulations. By benchmarking against newer facilities, operators can identify key areas where technological and infrastructural enhancements are needed. ... While integrating the latest server technologies might seem obvious, these systems demand different support from existing infrastructure. The increased computational loads should not compromise system reliability. Therefore, transitioning to newer generations of processors can require updates to your data center support infrastructure. This includes upgrading power distribution units (PDUs) to handle higher power densities, enhancing network infrastructure to support faster data transfer rates, and reinforcing structural components to accommodate the increased weight and space requirements of modern equipment.


Personhood: Cybersecurity’s next great authentication battle as AI improves

Although intriguing, the personhood plan has fundamental issues. First, credentials are very easily faked by gen AI systems. Second, customers may be hard-pressed to take the significant time and effort to gather documents and wait in line at a government office to prove that they are human simply to visit public websites or sales call centers. Some argue that the mass creation of humanity cookies would create another pivotal cybersecurity weak spot. “What if I get control of the devices that have the humanity cookie on it?” FaceTec’s Meier asks. “The Chinese might then have a billion humanity cookies at one person’s control.” Brian Levine, a managing director for cybersecurity at Ernst & Young, believes that, while such a system might be helpful in the short run, it likely won’t effectively protect enterprises for long. “It’s the same cat-and-mouse game” that cybersecurity vendors have always played with attackers, Levine says. ... Sandy Carielli, a Forrester principal analyst and lead author of the Forrester bot report, says a critical element of any bot defense program is to not delay good bots, such as legitimate search engine spiders, in the quest to block bad ones. “The crux of any bot management system has to be that it never introduces friction for good bots and certainly not for legitimate customers.”


What’s behind the return-to-office demands?

The effect is clear: an average employee wants to work three days a week in the office, while managers want them there four days. The managers win, of course: today half of all civil servants in Stockholm County work in the office four days a week, a clear increase. There are different conclusions one can draw. Mine are these: Physical workplaces and physical interaction are better than digital workspaces and meetings when it comes to creative tasks and social/cultural togetherness. I think, depending on what you work with, employees and managers are quite in agreement. Leadership in the hybrid work models has not developed in the ways and at the pace required. Managers still have an excessive need for control, with no way to deal with this without trying to return to what was previously comfortable. Employees have probably not managed to convey to their bosses the positive aspects of home work — for the employer. It’s great that your life puzzle is easier and you can take power walks and do laundry, but how does that help the company? It’s no wonder that whispering about sneaky vacations is taking off. And there’s an elephant in the room we should talk about — people really hate open office spaces and activity-based workplaces.


Passwordless AND Keyless: The Future of (Privileged) Access Management

Because SSH keys are functionally different from passwords, traditional PAMs don't manage them very well. Legacy PAMs were built to vault passwords, and they try to do the same with keys. Without going into too much detail about key functionality (like public and private keys), vaulting private keys and handing them out at request simply doesn't work. Keys must be secured at the server side, otherwise keeping them under control is a futile effort. Furthermore, your solution needs to discover keys first to manage them. Most PAMs can't. There are also key configuration files and other key(!) elements involved that traditional PAMs miss. ... Let's come back to the topic of passwords. Even if you have them vaulted, you aren't managing them in the best possible way. Modern, dynamic environments - using in-house or hosted cloud servers, containers, or Kubernetes orchestration - don't work well with vaults or with PAMs that were built 20 years ago. This is why we offer modern ephemeral access where the secrets needed to access a target are granted just-in-time for the session, and they automatically expire once the authentication is done. This leaves no passwords or keys to manage - at all.
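The ephemeral-access model described above can be illustrated with a toy credential that is minted per session and expires on its own, leaving nothing to vault. This is a conceptual sketch, not any vendor's implementation; the target name and TTL are made up.

```python
# Minimal sketch of just-in-time, auto-expiring access: a per-session
# secret with a TTL instead of a long-lived vaulted password or key.
import secrets
import time

class EphemeralCredential:
    def __init__(self, target: str, ttl_seconds: float):
        self.target = target
        self.token = secrets.token_hex(16)          # one-time session secret
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self) -> bool:
        return time.monotonic() < self.expires_at

cred = EphemeralCredential("db-server-01", ttl_seconds=0.05)
valid_now = cred.is_valid()      # usable during the session
time.sleep(0.1)
valid_later = cred.is_valid()    # expired; nothing left to rotate or vault
```

Production systems typically mint short-lived SSH certificates signed by a trusted CA rather than bare tokens, but the lifecycle, issued just-in-time and expiring automatically, is the same.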


Cybersecurity is Beyond Protecting Personal Data

Cyberattacks are not just about stealing personal data; they also involve stealing intellectual property and sensitive corporate information. In India, the number of data breaches has surged in recent years. The Indian Computer Emergency Response Team (CERT-In) reported over 150,000 cyber incidents in 2023 alone, with significant breaches occurring in sectors such as finance, healthcare, and government. ... While there is a global scarcity of competent cybersecurity personnel, India is experiencing an exceptionally severe shortfall. A report by (ISC)² indicates a worldwide shortage of 3 million cybersecurity workers, with India contributing significantly to this gap. This deficiency hinders businesses' capacity to detect and address cyber threats. Worse, team members' ignorance and lack of training can lead to human mistakes, which are a common way for cyberattacks to get started. ... Compliance with cybersecurity legislation and standards is critical for data protection and retaining confidence. India's legal landscape is changing, with initiatives like the Information Technology Act and the Personal Data Protection Bill aimed at improving cybersecurity.


Google calls for halting use of WHOIS for TLS domain verifications

TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communications that verifies a server belongs to a trusted entity and encrypts all traffic passing between it and an end user. ... The rules for how certificates are issued and the process for verifying the rightful owner of a domain are left to the CA/Browser Forum. One rule in its Baseline Requirements allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved. ... Specifically, watchTowr researchers were able to receive a verification link for any domain ending in .mobi, including ones they didn’t own. The researchers did this by deploying a fake WHOIS server and populating it with fake records. Creation of the fake server was possible because dotmobiregistry.net—the previous domain hosting the WHOIS server for .mobi domains—was allowed to expire after the server was relocated to a new domain. watchTowr researchers registered the domain, set up the imposter WHOIS server, and found that CAs continued to rely on it to verify ownership of .mobi domains.
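The verification flow the researchers abused is easy to sketch: a CA-style client queries whichever WHOIS server it believes is authoritative over TCP port 43 and scrapes contact addresses from the reply. The helper names here are hypothetical; the point is that all the trust rests on which `server` you ask:

```python
import re
import socket

def whois_query(domain: str, server: str, port: int = 43, timeout: float = 10.0) -> str:
    """Send a plain-text WHOIS query over TCP port 43 and return the raw response.
    Note: nothing here authenticates the server -- if `server` is an imposter
    (as with the expired dotmobiregistry.net), the caller trusts fake records."""
    with socket.create_connection((server, port), timeout=timeout) as s:
        s.sendall(domain.encode() + b"\r\n")
        chunks = []
        while (data := s.recv(4096)):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def registrant_emails(whois_text: str) -> list[str]:
    """Extract email addresses from a WHOIS record -- the addresses a CA
    might mail a domain-validation link to."""
    return re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", whois_text)
```

An attacker who controls the WHOIS server controls the email addresses returned, and therefore who receives the approval link.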


How API Security Fits into DORA Compliance: Everything You Need to Know

Financial institutions rely heavily on third-party service providers, and APIs are the gateway through which many of these vendors access core banking systems. This introduces significant risk, as third-party APIs may become the weakest link in the supply chain. DORA places substantial emphasis on managing these risks, as outlined in Article 28, stating that financial entities must ensure that third-party providers “implement and maintain appropriate measures to manage ICT risks” and that institutions must “ensure the quality and integration of ICT services provided by third parties.” You need to start simple and be able to answer two questions: Who are your vendors? What third-party apps do you have connected? One of the biggest challenges here is the concept of shadow APIs—those untracked, unauthorized, or forgotten endpoints that can remain active long after their intended purpose. Shadow APIs expose financial institutions to vulnerabilities, making it difficult to track and control third-party access. DORA’s Article 28 further reinforces the need for financial institutions to “assess third-party ICT service providers’ ability to protect the integrity, security, and confidentiality of data, and to manage risks related to outsourcing.”
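Answering "what third-party apps do you have connected?" often starts with a diff between what is documented and what is actually being called. A minimal sketch of that idea, with endpoint names invented for illustration:

```python
# Endpoints the team has documented (e.g. from an OpenAPI spec).
documented = {"/accounts", "/payments", "/transfers"}

# What the gateway access log actually saw (illustrative records).
observed_log = [
    "GET /accounts",
    "POST /payments",
    "GET /v1/debug/export",   # never documented -- a candidate shadow API
    "POST /transfers",
]

def shadow_endpoints(documented: set[str], log_lines: list[str]) -> set[str]:
    """Endpoints that receive live traffic but appear in no spec --
    the 'untracked, unauthorized, or forgotten' category."""
    observed = {line.split(" ", 1)[1] for line in log_lines}
    return observed - documented
```

Real discovery tooling parses richer traffic and spec formats, but the core operation is exactly this set difference, run continuously.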


Dirty code still runs, and that’s not a good thing

Quality code benefits developers by minimizing the time and effort spent on patching and refactoring later. Having confidence that code is clean also enhances collaboration, allowing developers to more easily reuse code from colleagues or AI tools. This not only simplifies their work but also reduces the need for retroactive fixes and helps prevent and lower technical debt. To deliver clean code, developers should start with the right guardrails, tests, and analysis from the beginning, in the IDE. Pairing unit testing with static analysis also helps ensure quality. The sooner these reviews happen in the development process, the better. ... Developers and businesses can’t afford to perpetuate the cycle of bad code and, consequently, subpar software. Pushing poor-quality code through to deployment will only reintroduce software that breaks down later, even if it seems to run fine in the interim. To end the cycle, developers must deliver software built on clean code before deploying it. By implementing effective reviews and tests that gatekeep bad code before it becomes a major problem, developers can better equip themselves to deliver software with both functionality and longevity.
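As a concrete illustration of pairing static analysis with unit testing, consider Python's classic mutable-default-argument defect: a linter flags the pattern (pylint's W0102 "dangerous-default-value" warning), while a unit test pins down the correct behavior so the fix cannot regress. The function names are invented for the example:

```python
def append_item_buggy(item, items=[]):
    """A linter flags this signature: the default list is created once and
    shared across calls, so state silently leaks between invocations."""
    items.append(item)
    return items

def append_item_fixed(item, items=None):
    """The idiomatic fix: create a fresh list per call when none is given."""
    items = [] if items is None else items
    items.append(item)
    return items

def test_fresh_list_each_call():
    """The unit test that catches the defect early, in the IDE."""
    assert append_item_fixed("a") == ["a"]
    assert append_item_fixed("b") == ["b"]   # buggy version returns ["a", "b"]
```

The linter catches the pattern before the code ever runs; the test catches the behavior even if the pattern is disguised. Together they close the gap either one alone would leave.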


The Perfect Balance: Merging AI and Design Thinking for Innovative Pricing Strategies

This combination of AI’s optimization and Design Thinking’s creative transformation is exactly what modern businesses need to stay competitive. Relying solely on AI to adjust pricing may lead to efficiency gains, but without the innovation brought by Design Thinking, businesses risk missing out on new opportunities to reshape their pricing models and align them more closely with customer needs. Conversely, while Design Thinking can spark innovation, without AI’s precision, companies might struggle to implement their ideas in a way that maximizes profitability. It is by uniting these two approaches that organizations can build pricing strategies that are both efficient and forward-looking. For businesses, pricing is a powerful lever that influences profitability, market position, and customer perception. In today’s competitive landscape, those that fail to leverage both AI and Design Thinking risk falling behind. AI offers the operational benefits of real-time optimization, driving immediate financial returns. Design Thinking provides the creative space to explore new value propositions and pricing structures that can secure long-term customer loyalty. 



Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower

Daily Tech Digest - September 21, 2024

Quantinuum Scientists Successfully Teleport Logical Qubit With Fault Tolerance And Fidelity

This research advances quantum computing by making teleportation a reliable tool for quantum systems. Teleportation is essential in quantum algorithms and network designs, particularly in systems where moving qubits physically is difficult or impossible. By implementing teleportation in a fault-tolerant manner, Quantinuum’s research brings the field closer to practical, large-scale quantum computing systems. The fidelity of the teleportation also suggests that future quantum networks could reliably transmit quantum states over long distances, enabling new forms of secure communication and distributed quantum computing. The use of QEC in these experiments is especially promising, as error correction is one of the key challenges in making quantum computing scalable. Without fault tolerance, quantum states are prone to errors caused by environmental noise, making complex computations unreliable. The fact that Quantinuum achieved high fidelity using real-time QEC demonstrates the increasing maturity of its hardware and software systems.


Adversarial attacks on AI models are rising: what should you do now?

Adversarial attacks on ML models look to exploit gaps by intentionally attempting to redirect the model with inputs, corrupted data, jailbreak prompts and by hiding malicious commands in images loaded back into a model for analysis. Attackers fine-tune adversarial attacks to make models deliver false predictions and classifications, producing the wrong output. ... Disrupting entire networks with adversarial ML attacks is the stealth attack strategy nation-states are betting on to disrupt their adversaries’ infrastructure, which will have a cascading effect across supply chains. The 2024 Annual Threat Assessment of the U.S. Intelligence Community provides a sobering look at how important it is to protect networks from adversarial ML model attacks and why businesses need to better secure their private networks against them. ... Machine learning models that lack adversarial training are far easier to manipulate. Adversarial training uses adverse examples and significantly strengthens a model’s defenses. Researchers say adversarial training improves robustness but requires longer training times and may trade accuracy for resilience.
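The canonical example of such input manipulation is the Fast Gradient Sign Method (FGSM): step the input a small amount in the sign direction of the loss gradient, which reliably nudges the model toward a wrong answer. A toy sketch on a two-feature logistic model, with weights and inputs invented for illustration:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, y, eps):
    """FGSM on a logistic model p = sigmoid(w . x).
    For the log-loss, the gradient w.r.t. the input x is (p - y) * w;
    stepping eps in its sign direction maximally increases the loss."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

w = [2.0, -1.0]    # toy trained weights
x = [0.5, 0.2]     # input the model correctly classifies as positive
y = 1              # true label

p_before = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
x_adv = fgsm_perturb(x, w, y, eps=0.5)
p_after = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
# p_after drops below 0.5: a small crafted perturbation flips the prediction.
```

Adversarial training works by generating exactly these perturbed examples during training and teaching the model to classify them correctly anyway, which is why it lengthens training and can cost clean-data accuracy.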


4 ways to become a more effective business leader

Delivering quantitative results isn't the only factor that defines effective leaders -- great managers also possess the right qualitative skills, including being able to communicate and collaborate with their peers. "Once you reach that higher level in the business, particularly if you are part of the executive committee, you need to know how to deal with corporate politics," said Vogel. Managers must recognize that underlying corporate politics can be driven by social motivations. Great leaders see the signs. "If you're unable to read the room and understand and navigate that context, it's going to be tough," said Vogel. ... The rapid pace of change in modern organizations represents a huge challenge for all business leaders. Vogel instructed would-be executives to keep learning. "Especially at the moment, and the world we work in, you need to upskill yourself," she said. "We have had so much change happening in the business." Vogel said technology is a key factor in the rapid pace of change. The past two years have seen huge demands for Gen AI and machine learning. In the future, technological innovations around blockchain, quantum computing, and robotics will lead to more pressure for digital transformation.


Cloud architects: Try thinking like a CFO

Cloud architects must cut through the hype and focus on real-world applications and benefits. More than mere technological enhancement is required; architects must make a clear financial case. This is particularly apt in environments where executive decision-makers demand justification for every technology dollar spent. Aligning cloud architecture strategies with business outcomes requires architects to step beyond traditional roles and strategically engage with critical financial metrics. For example, reducing operational expenses through efficient cloud resource management will directly impact a company’s bottom line. A successful cloud architect will provide CFOs with predictive analytics and cost-saving projections, demonstrating clear business value and market advantage. Moreover, the increasing pressure on businesses to operate sustainably allows architects to leverage the cloud’s potential for greener operations. These are often strategic wins that CFOs can directly appreciate in terms of corporate financial and social governance metrics. However, when I bring up the topic of sustainability, I receive a lot of nods, but few people seem to care. 


Wherever There's Ransomware, There's Service Account Compromise. Are You Protected?

Most service accounts are created to access other machines. That inevitably implies that they have the required access privileges to log in and execute code on these machines. This is exactly what threat actors are after, as compromising these accounts would give them the ability to access and execute their malicious payload. ... Some service accounts, especially those associated with installed on-prem software, are known to the IT and IAM staff. However, many are created ad-hoc by IT and identity personnel with no documentation. This makes the task of maintaining a monitored inventory of service accounts close to impossible. This plays into attackers' hands, as compromising and abusing an unmonitored account has a far greater chance of going undetected by the attack's victim. ... The common security measures that are used for the prevention of account compromise are MFA and PAM. MFA can't be applied to service accounts because they are not human and don't own a phone, hardware token, or any other additional factor that can be used to verify their identity beyond their username and passwords. PAM solutions also struggle with the protection of service accounts.
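A first step toward the monitored inventory described above is flagging service accounts that behave like humans. The sketch below uses an invented event format and account-naming convention, not any real SIEM schema; the underlying signal, though, is standard practice: service accounts exist for machine-to-machine access, so an interactive logon by one is a strong compromise indicator.

```python
# Hypothetical logon events -- field names are illustrative only.
events = [
    {"account": "svc-backup", "type": "network",     "host": "db01"},
    {"account": "svc-backup", "type": "interactive", "host": "laptop-77"},  # suspicious
    {"account": "alice",      "type": "interactive", "host": "laptop-12"},
]

def suspicious_service_logons(events, prefix="svc-"):
    """Flag interactive logons by accounts that follow the service-account
    naming convention; these should essentially never happen."""
    return [e for e in events
            if e["account"].startswith(prefix) and e["type"] == "interactive"]
```

In practice the account list would come from directory queries and discovery tooling rather than a naming prefix, but the detection logic stays this simple.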


Datacenters bleed watts and cash – all because they're afraid to flip a switch

The good news is CPU vendors have developed all manner of techniques for managing power and performance over the years. Many of these are rooted in mobile applications, where energy consumption is a far more important metric than in the datacenter. According to Uptime, these controls can have a major impact on system power consumption and don't necessarily have to kneecap the chip by limiting its peak performance. The most power efficient of these regimes, according to Uptime, are software-based controls, which have the potential to cut system power consumption by anywhere from 25 to 50 percent – depending on how sophisticated the operating system governor and power plan are. However, these software-level controls also have the potential to impart the biggest latency hit. This potentially makes these controls impractical for bursty or latency-sensitive jobs. By comparison, Uptime found that hardware-only implementations designed to set performance targets tend to be far faster when switching between states – which means a lower latency hit. The trade-off is the power savings aren't nearly as impressive, topping out around ten percent.
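Uptime's percentages translate directly into energy terms. A back-of-the-envelope sketch for a hypothetical fleet (the 1,000-server size and 500 W per-server draw are invented for illustration; the 25% and 10% figures are the article's):

```python
def annual_savings_kwh(base_watts: float, reduction_pct: float, servers: int) -> float:
    """Energy saved per year if each server's average draw drops by reduction_pct."""
    hours_per_year = 24 * 365  # 8,760
    return base_watts * (reduction_pct / 100) * servers * hours_per_year / 1000

# Applying Uptime's ranges to the hypothetical fleet:
software_governor = annual_savings_kwh(500, 25, 1000)  # conservative end of 25-50%
hardware_only     = annual_savings_kwh(500, 10, 1000)  # ~10% ceiling
# software_governor ≈ 1,095,000 kWh/yr vs hardware_only ≈ 438,000 kWh/yr --
# the latency-tolerant software path saves 2.5x more at the 25% mark.
```

The gap is what makes the workload question decisive: bursty, latency-sensitive jobs may only tolerate the hardware path, while batch fleets can bank the larger software-governor savings.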


An AI-Driven Approach to Risk-Scoring Systems in Cybersecurity

The integration of AI into risk-scoring systems also enhances the overall security strategy of an organization. These systems are not static, but rather learn and adapt over time, becoming increasingly effective as they encounter new threat patterns and scenarios. This adaptive capability is crucial in the face of rapidly evolving cyber threats, allowing organizations to stay one step ahead of potential attackers. An example of this in action is detecting anomalies during user sign-on by analyzing physical attributes and comparing them to typical behavior patterns. ... It's important, however, to realize that AI is not a cure-all for every cybersecurity challenge. The most impactful strategies combine the analytical power of AI with human expertise. While AI excels at processing vast amounts of data and identifying patterns, human analysts provide critical contextual understanding and decision-making capabilities. It's crucial for AI systems to continuously learn from the input of subject matter experts (SMEs) through a feedback loop to refine their accuracy and minimize alert fatigue; this collaboration between human and artificial intelligence creates a robust defense against a wide range of cyber threats.
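The sign-on example can be reduced to a few lines: score each login by how far it deviates from that user's own baseline. Real systems use far richer features and learned models; this toy, a stand-in for illustration, uses only the hour of day:

```python
import statistics

def signon_risk(baseline_hours: list[int], new_hour: int) -> float:
    """Deviation of a new sign-on time from the user's baseline, measured in
    standard deviations -- a minimal stand-in for behavioral risk scoring."""
    mean = statistics.mean(baseline_hours)
    spread = statistics.pstdev(baseline_hours) or 1.0  # guard a constant baseline
    return abs(new_hour - mean) / spread

history = [9, 9, 10, 8, 9, 10, 9]   # user normally signs on around 9am
routine = signon_risk(history, 9)   # low score: nothing to flag
odd = signon_risk(history, 3)       # 3am logon scores high and triggers review
```

The feedback loop the article describes corresponds to analysts confirming or dismissing the flagged logons, and the threshold (and eventually the features) being tuned from those verdicts.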


API Security in Financial Services: Navigating Regulatory and Operational Challenges

API breaches can have devastating consequences, including data loss, brand damage, financial losses, and customer attrition. For example, a breach that exposes customer account information can lead to financial theft and identity fraud. The reputational damage from such incidents can result in loss of customer trust and increased scrutiny from regulators. Institutions must recognize the potential fallout from breaches and take proactive steps to mitigate these risks, understanding that the cost of breaches often far exceeds the investment in robust security measures. ... Common security controls such as encryption, data loss prevention, and web application firewalls are widely used, yet their effectiveness remains limited. The report indicates that 45% of financial institutions can only prevent half or fewer API attacks, underscoring the need for improved security strategies and tools. Encryption, while essential, only protects data at rest and in transit, leaving APIs vulnerable to other types of attacks like injection and denial-of-service. Further, data loss prevention systems often struggle to keep pace with the volume and complexity of API traffic.


Guide To Navigating the Legal Perils After a Cyber Incident

Cyber incidents pose significant technical challenges, but the real storm often hits after the breach gets contained, Nall said. That’s when regulators step in to scrutinize every decision made in the heat of the crisis. While scrutiny has traditionally focused on corporate leadership or legal departments, today, infosec workers risk facing charges of fraud, negligence, or worse, simply for doing their jobs. ... Instead of clear, universal cybersecurity standards, regulatory bodies like the SEC only define acceptable practices after a breach occurs, Nall said. This reactive approach puts CISOs and other infosec workers at a distinct disadvantage. "Federal prosecutors and SEC attorneys read the paper like anyone else, and when they see bad things happening, like major breaches, especially where there is a delay in disclosure, they have to go after those companies," Nall explained during her presentation. ... Fortunately, CISOs and other infosec workers can take several concrete steps to protect their careers and reputations. By implementing airtight communication practices and negotiating solid legal protections, they can navigate the fallout of a disastrous cyber incident. 


As the AI Bubble Deflates, the Ethics of Hype Are in the Spotlight

One of the major problems we’re seeing right now in the AI industry is the overpromising of what AI tools can actually do. There’s a huge amount of excitement around AI’s observational capacities, or the notion that AI can see things that are otherwise unobservable to the human eye due to these tools’ ability to discern trends from huge amounts of data. However, these observational capacities are not only overstated, but also often completely misleading. They lead to AI being attributed almost magical powers, whereas in reality a large number of AI products grossly underperform compared to what they’re promised to do. ... So, the true believers caught up in the promises and excitement are likely to be disappointed. But throughout the hype cycle, many notable figures including practitioners and researchers have challenged narratives about the unconstrained transformational potential of AI. Some have expressed alarm at the mechanisms, techniques, and behavior at play which allowed such unbridled fervour to override the healthy caution necessary ahead of the deployment of any emerging technology, especially one which has the potential for such massive societal, environmental, and social upheaval.



Quote for the day:

“Start each day with a positive thought and a grateful heart.” -- Roy T. Bennett

Daily Tech Digest - September 20, 2024

The New Normal in Disaster Recovery: Preparing for Ransomware Attacks Takes a New Approach

Early detection of ransomware can be difficult due to sophisticated malware that operates stealthily, attacks occurring outside business hours, and the scale of large, complex networks. Rapid containment prevents further spread but requires quick decision-making to isolate systems without disrupting critical operations. Tracing the initial point of entry and identifying all compromised systems is complex and time-consuming but essential to prevent reinfection. Isolated recovery environments (IREs) or cleanrooms provide secure, isolated environments for data recovery and system rebuilding, designed to prevent reinfection during the recovery process. ... To protect against data loss, organizations of all types need to implement immutable and air-gapped backups, using write-once-read-many (WORM) technology and physically or logically isolating backup systems from the main network. Increasing backup frequency and redundancy is also advised, along with diversifying backup storage and maintaining multiple versions of backups with appropriate retention policies.
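The core property of a WORM backup target can be captured in miniature: once an object is written, the store refuses any overwrite or delete, so ransomware that reaches the backup tier cannot encrypt what is already there. A toy in-memory sketch, not any vendor's implementation:

```python
class WormStore:
    """Write-once-read-many semantics in miniature: objects are immutable
    after the first write, which is what makes such backups resistant to
    ransomware that tries to overwrite them with encrypted data."""

    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def write(self, key: str, data: bytes) -> None:
        if key in self._objects:
            raise PermissionError(f"{key} is immutable; WORM store refuses overwrite")
        self._objects[key] = bytes(data)

    def read(self, key: str) -> bytes:
        return self._objects[key]
```

Keeping multiple dated versions then follows naturally: each backup run writes under a new key (for example a timestamped name), and retention policy decides which keys the store is eventually allowed to age out.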


Big Tech criticises EU’s AI regulation – is it justified?

An open letter signed by various Big Tech leaders – including Patrick Collison and Meta’s Mark Zuckerberg – claims Europe is becoming less competitive and innovative than other regions due to “inconsistent regulatory decision making”. This letter follows a report from former Italian prime minister Mario Draghi, which called for an annual spending boost of €800bn to prevent a “slow and agonising decline” economically. But the Big Tech warning also follows issues for these companies to train their AI models with the data of EU citizens using their services. ... But the letter also says the EU’s current regulation means the bloc risks missing out on “open” AI models and the latest “multimodal” models that can operate across text, images and speech. The letter says that if companies are going to invest heavily in AI models for European citizens, they need “clear rules” that enable the use of European data. “But in recent times, regulatory decision making has become fragmented and unpredictable, while interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models,” the letter reads.


Innovation: What is next?

Innovations in technology that prioritize environmental sustainability may offer potential solutions. However, the solution is not as straightforward as depending solely on temporary fixes and implementing a small number of innovative strategies. The analysis shows India’s green technology potential and innovation, particularly in wind, solar, geothermal, ocean, hydro, biomass, and waste energy. However, patenting activity has plateaued in recent years, indicating the need for a strategic approach to green technology innovation in India. ... Increasing private sector investment confidence and working with industry and universities can also make big changes. Moreover, through the strategic utilization of geo-political advantages and the establishment of a vibrant and cooperative environment, India has the potential to significantly advance its green technology industry and make substantial contributions to international endeavors aimed at addressing climate change, all the while promoting economic development. ... Further, deep-tech innovation and a focus on product creation in underserved markets can turn out to be a game changer for India. According to Nasscom, the start-up ecosystem will add 250 scale-ups in tech, logistics, automotive, fintech, and health tech by 2025.


What Lawyers Want You to Know About NFTs

"To avoid legal trouble, sellers of NFTs should make sure that they either own the copyright in the work of art associated with the NFT, or that they have the permission of the copyright owner to make and sell NFTs of the artwork,” says Tyler Ochoa, professor of law at Santa Clara University School of Law. “They should also avoid incorporating any other works of art or any trademarks that are owned by others. And if more than one person is involved in the project, such as an artist and an entrepreneur, they should clearly specify the rights and responsibilities of all parties to the project, and the division of any profits, in a signed, written agreement.” ... Trademark infringement is another significant concern. The Wright Law Firm’s Wright says that, as illustrated in Hermès Int'l v. Rothschild, the creation and sale of "MetaBirkins" NFTs, which depicted faux-fur versions of Hermès' Birkin handbags, led to claims of trademark infringement, trademark dilution, and cybersquatting. “[The Hermès Int’l v. Rothschild] case underscores the potential for NFTs to infringe on existing trademarks, especially when they replicate or closely imitate well-known brands without authorization,” says Wright.


3 API Vulnerabilities Developers Accidentally Create

The problem with APIs isn’t so much that they’re hard to secure, but that they are prolific and developers prioritize other tasks over testing and securing APIs, she added. There are literally hundreds of thousands of API endpoints, so it’s not surprising things get missed. ... But it’s also an IT cultural problem that creates security problems. “At the end of the day, any developer is going to value breaking down their product backlog and their sprint backlog more than fixing vulnerabilities, because in a sprint, even in the waterfall model of software engineering, the focus is on completing features to get a complete product,” Paxton-Fear said. “Fixing bugs isn’t given the same priority. And this is how things get forgotten.” Instead, there needs to be basic internal reviews where finding vulnerabilities is prioritized. And security can’t be the Department of No, because that ends up in conflict with developers instead of solving security problems. And IT organizations have to stop prioritizing speed over security. “While you can get a solution that can really help you manage it, if you don’t have the teamwork and the culture around security, it’s going to fail, just like anything else will,” she said.


What is pretexting? Definition, examples, and attacks

There are two main elements to a pretext: a character, played by the scam artist; and a plausible situation, in which the character needs or has a right to specific information. For instance, because errors can arise with automatic payment systems, it’s plausible that a recurring bill payment we’ve set up might mysteriously fail, prompting the company we owe to reach out as a result. An attacker taking on the character of a helpful customer service rep reaching out to help us fix the error might ask for bank or credit card information as the scenario plays out to gain the information necessary to steal money from our accounts. ... Often lumped under the heading pretexting, tailgating is a common technique for getting through a locked door by simply following someone who can open it inside before it closes. It can be considered pretexting because the tailgater often adopts a persona that encourages the person with the key to let them into the building — for instance, by wearing a jumpsuit and claiming they’re there to fix the plumbing, or by carrying a pizza box they say must be delivered to another floor. 


Post-Digital Transformation: How to Evolve Beyond Initial Tech Adoption

Digital transformation often brings a cultural shift, as companies adopt new technologies that change how they operate. However, many organizations stop short of building a fully agile and adaptable culture. In a post-digital world, agility becomes a crucial differentiator. Technology is evolving faster than ever, and customer expectations are constantly changing. Businesses need to foster a culture where rapid experimentation, quick decision-making, and the ability to pivot are embedded in daily operations. This culture must extend across the entire organization, from leadership to frontline employees. To do this, companies can adopt agile methodologies, break down silos between departments and encourage cross-functional teams to collaborate. By creating an environment where employees are empowered to innovate and experiment without fear of failure, businesses can stay ahead of the curve. ... One of the most significant outcomes of digital transformation is the wealth of data that businesses now have access to. But collecting data is not enough—companies must be able to turn that data into actionable insights.


The AI Threat: Deepfake or Deep Fake? Unraveling the True Security Risks

AI-produced deepfakes and AI-improved phishing are a bigger problem. Deepfakes come in two varieties: voice and image/video; both of which are now rapidly improving commodity outputs from readily available gen-AI models – and neither of which is easy to detect by either humans or technology. ... The security industry is not waiting for the dam to break. There have been numerous new startups in 2024 all working on their own solution on how to detect AI and deepfake attacks, while existing firms have refocused on deepfake detection. Pindrop is an example of the latter. In July 2024, it raised $100 million in debt financing primarily to develop additional tools able to detect deepfake voice attacks. Deepfake voice is the easiest deepfake to produce, the most employed, and the easiest to detect. This is because there are subtle audible clues that a voice is not human generated that can be detected by technology if not by the human ear. The danger exists where that detection technology is not being used. The same can be said for the current generation of AI-enhanced polymorphic malware detection systems: they can work, but only where they are being used.


Traditional CX on Deathbed as AI Agents Thrive

AI agents are an indispensable part of modern CX strategies, enabling real-time personalization, proactive engagement and outcome tracking. This shift toward automation is key to reducing operational costs as AI agents are made to handle tasks such as ticket routing, knowledge base management and first-contact resolutions. Eighty-six percent of CX leaders predicted that CX will be "utterly transformed" over the next three years. Human agents will be able to pick complex conversations from an AI agent, who will already have the details regarding the issue, and the customer will no longer need to repeat themselves. AI will instead act as their copilot, shifting human roles toward "expertise-based work, away from routine tasks." Recognizing the evolving trend, Salesforce, a leader in AI integration, has introduced Agentforce, a "proactive, autonomous application that provides specialized, always-on support to employees or customers." Agentforce uses machine learning to deploy autonomous bots for routine customer service tasks. With AI agents, the company aligns its customer service efforts with business outcomes such as increased sales conversions or customer retention, which is directly tied to pricing.


Striking the balance between cybersecurity and operational efficiency

Security supports the business, the controls are aligned and make perfect sense, their implementation is smooth, they are behind the scenes, and you can always get help quickly. In case of an accident, you can move to either the left or the right, so you actually have more options than on any of the other lanes, so this is quite flexible as well. You can see where I am going with this, right? Similarly, you need to be flexible with your cybersecurity strategy – develop your long-term strategy and start executing it – but use tactics to do so. When the timing aligns with a business opportunity, the chances of success are far greater than in the middle of a business disruption. Learn to leverage upcoming situations as great opportunities for the long-term advancement of your security strategy. ... It is important to understand that there are plenty of such frameworks and guidelines – just imagine, in a short blast: ISO27XXX, NIST-800-XXX, NIST CSF, CIS, COBIT, COSO, ITIL, PCI, OWASP, plus a plethora of others, plus all the regulations. Further, the majority of these frameworks are quite similar when you actually break them down, with considerable overlap, but also serious gaps.



Quote for the day:

"The mediocre leader tells. The good leader explains. The superior leader demonstrates. The great leader inspires." -- Gary Patton

Daily Tech Digest - September 19, 2024

AI, Software Architecture and New Kinds of Products

Although AI won’t change the practice of software architecture, AI will make a big change in what software architects architect. The first generation of AI-enabled applications will be similar to what we have now. For example, integrating generative AI into word processing and spreadsheet applications (as Microsoft and Google are doing) or tools for AI-assisted programming (like GitHub Copilot and others). But before long, we will be building different kinds of software. ... Architects will also play a role in evaluating an AI’s performance. Evals determine whether the application’s performance is acceptable. But what does “acceptable” mean in the application’s context? That’s an architectural question. In an autonomous vehicle, misidentifying a pedestrian isn’t acceptable; picking a suboptimal route is tolerable. In a recommendation engine, poor recommendations aren’t a problem as long as a reasonable number are good. What’s “reasonable”? That’s an architectural question. Evals also give us our first glimpses of what running the application in production will be like. Is the performance acceptable? Is the cost of running it acceptable? 
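The eval-as-architecture point can be made concrete: once someone answers the "what's acceptable?" question with a number, the eval itself is just a scoring loop and a gate. A toy sketch, with the model, cases, and threshold all invented for illustration:

```python
def run_eval(model, cases, threshold: float):
    """Score a model on held-out cases and gate on an acceptability threshold.
    Choosing `threshold` is the architectural decision; this loop just applies it."""
    passed = sum(1 for inp, expected in cases if model(inp) == expected)
    score = passed / len(cases)
    return score, score >= threshold

def model(x):
    """Toy stand-in for the system under evaluation."""
    return x * 2

# Toy eval set: one case is deliberately wrong, so the model scores 3/4.
cases = [(1, 2), (2, 4), (3, 7), (4, 8)]

score, acceptable = run_eval(model, cases, threshold=0.7)
```

For a recommendation engine the architect might set `threshold` at "a reasonable fraction of good recommendations"; for a pedestrian detector, the equivalent gate on misidentification would sit at or near zero. Same loop, radically different architectural answer.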


The AI Power Paradox

Wellise notes that AI technologies may also help data centers to manage their energy consumption by monitoring environmental conditions and adjusting use of resources accordingly. “In one of our Frankfurt data centers, we deployed the use of AI to create digital twins where we model data associated with climate parameters,” he explains. AI can also help tech companies that operate data centers in different areas to allocate their resources according to the availability of renewables. If it is sunny in California and solar energy is available to a data center there, models can ramp up their training at that location and ramp it down in cloudy Virginia, Demeo says. “Just because they’re there doesn't mean they have to run at full capacity.” Data center customers, too, can have an impact. They can stipulate that they will only use a data center’s services under certain circumstances. “They will use your data center only until a certain price. Beyond that, they will not use it,” Chaudhuri relates. Though application of even the most moderate of these technologies is not yet widespread, advocates claim that these experimental setups may eventually be more widely applicable.


Quantinuum Unveils First Contribution Toward Responsible AI

This research has significant implications for the future of AI and quantum computing. One of the most notable outcomes is the potential to use quantum AI for interpretable models. In current large language models (LLMs) like GPT-4, decisions are often made in a “black box” fashion, making it difficult for researchers to understand how or why certain outputs are generated. In contrast, the QDisCoCirc model allows researchers to inspect the internal quantum states and the relationships between different words or sentences, providing insights into the decision-making process. In practical terms, this could have wide-reaching applications in areas such as question answering systems, also referred to as ‘classification’ challenges, where understanding how a machine reaches a conclusion is as important as the answer itself. By offering an interpretable approach, quantum AI using compositional methods could be applied in fields like legal, medical, and financial sectors, where accountability and transparency in AI systems are critical. The study also showed that compositional generalization—the ability of the model to generalize from smaller training sets to larger and more complex inputs—was successful.


Differential privacy in AI: A solution creating more problems for developers?

Differential privacy protects personal data by adding random noise, making it harder to identify individuals while preserving the dataset’s overall statistical utility. The fundamental concept revolves around a parameter, epsilon (ε), which acts as a privacy knob. A lower epsilon value results in stronger privacy but adds more noise, which in turn reduces the usefulness of the data. A developer at a major fintech company recently voiced frustration over differential privacy’s effect on their fraud detection system, which needs to detect tiny anomalies in transaction data. “When noise is added to protect user data,” they explained, “those subtle signals disappear, making our model far less effective.” Fraud detection thrives on spotting minute deviations, and differential privacy easily masks these critical details. The stakes are even higher in healthcare. For instance, AI models used for breast cancer detection rely on fine patterns in medical images. Adding noise to protect privacy can blur these patterns, potentially leading to misdiagnoses. This isn’t just a technical inconvenience—it can put lives at risk.
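The epsilon trade-off described above can be sketched with the standard Laplace mechanism for a counting query. The function names and the toy transaction data are illustrative assumptions, not from the article; the key point is that the noise scale is 1/ε, so shrinking ε (stronger privacy) directly inflates the noise.

```python
import random

def laplace_noise(scale):
    # The difference of two iid exponentials with mean `scale`
    # is a Laplace(0, scale) variate.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(values, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.
    A count changes by at most 1 if one record changes (sensitivity 1),
    so the Laplace noise scale is 1/epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

transactions = [120, 5000, 75, 9800, 42]
# Lower epsilon -> stronger privacy -> more noise -> less useful answer.
noisy = private_count(transactions, lambda t: t > 1000, epsilon=0.5)
```

This is exactly the tension the fintech developer describes: at small ε the noise scale (here 2.0) can swamp a true count of 2, erasing the subtle signal the fraud model depends on.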


Thinking of building your own AI agents? Don’t do it, advisors say

Large companies may be tempted to roll their own highly customized agents, he says, but they can get tripped up by fragmented internal data, by underestimated resource needs, and by a lack of in-house expertise. “While some companies may achieve success, it’s common for these projects to spiral out of control in terms of cost and complexity,” Ackerson says. “In many cases, buying a solution from a trusted partner can help organizations avoid the pitfalls of builder’s remorse and accelerate their path to success.” AlphaSense has trained its own AI agents, but many companies lack internal expertise, he says. In addition, organizations may project the development costs but ignore the cost of ongoing maintenance, he adds. “This is the largest cost, as maintaining AI systems over time can be complex and resource-intensive, requiring constant updates, monitoring, and optimization to ensure long-term functionality,” Ackerson says. Partnering with an AI provider can give companies access to proven, ready-made agents that have been tested and refined by thousands of users, he contends. “It’s faster to implement, less resource-intensive, and comes with the added benefit of ongoing updates and support — freeing companies to focus on other critical areas of their business,” he says.


Building an Enterprise Data Strategy: What, Why, How

After completing the assessment of your current data management efforts and defining your objectives and priorities, you can begin to assemble the data governance framework by defining roles, responsibilities, and procedures for the entire data lifecycle. This includes data ownership, access controls, security, and compliance as well as data consistency, accountability, and integrity. The next step is to establish the processes and tools used to manage data quality, which include data profiling, cleansing, standardization, and validation. Determine the mechanisms for integrating data to create a seamless and coherent data environment encompassing the entire enterprise. Data lifecycle management covers data retention policies, archival storage, and data purging to ensure efficient storage management. The glue that keeps the many moving pieces of an enterprise data strategy working together harmoniously is your company’s culture of data literacy and empowerment. Employees and managers must be trained to recognize the value of data to the organization, and the importance of maintaining its quality and security. 
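The data-quality steps listed above (profiling, cleansing/standardization, validation) can be illustrated with a minimal sketch. The record layout, field names, and rules below are hypothetical examples, not part of the article; real data-quality tooling would drive these checks from a governed rule catalog.

```python
# Minimal sketch of three data-quality steps: profiling, standardization,
# and validation. All field names and rules are hypothetical.

def profile(records):
    """Profiling: count missing values per field."""
    missing = {}
    for rec in records:
        for field, value in rec.items():
            if value in (None, ""):
                missing[field] = missing.get(field, 0) + 1
    return missing

def standardize(rec):
    """Standardization: normalize string casing and whitespace."""
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in rec.items()}

def validate(rec):
    """Validation: apply simple rules; return a list of violations."""
    errors = []
    if not rec.get("email") or "@" not in rec["email"]:
        errors.append("invalid email")
    if rec.get("age") is not None and not (0 <= rec["age"] <= 120):
        errors.append("age out of range")
    return errors

records = [{"email": "  Ana@Example.COM ", "age": 34},
           {"email": "", "age": 200}]
gaps = profile(records)                      # {"email": 1}
clean = [standardize(r) for r in records]
issues = [validate(r) for r in clean]
# issues[0] == [], issues[1] == ["invalid email", "age out of range"]
```

In an enterprise setting the same pattern scales up: profiling informs the governance framework about where quality problems concentrate, and validation rules encode the ownership and accountability decisions made there.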


Beware the Great AI Bubble Popping

This does not mean that the technology will never make money. Early stages of evolution in any tech usually involve trying products in the market by making them as accessible as possible and monetizing the solutions when there's clarity on use cases, sizable adoption, dependency and demand. Generative AI will take a while longer to get there. The Great Popping will also lead to the ecosystem thinning. Startups with speculative or unsustainable business models will close up shop as funding dries up. The most likely future scenario is that the AI landscape will shift to make room for a small number of long-term players that focus on practical applications, while the rest go bust. Despite sharing similarities with the dot-com bubble, the residue of the AI one will likely differ in that entire companies, especially the OpenAIs and the Anthropics, won't likely shutter completely. They may close down money-guzzling units, rejigger focus or even pivot entirely, but they are unlikely to vanish off the face of the earth as their dot-com counterparts did. Job losses are a likely inevitability, and few firms will hire the laid-off employees. 


Why Jensen Huang and Marc Benioff see ‘gigantic’ opportunity for agentic AI

In the future, Huang noted, there will be AI agents that understand subtleties and that can reason and collaborate. They’ll be able to find other agents to “work together, assemble together,” while also talking to humans and soliciting feedback to improve their dialogue and outputs. Some will be “excellent” at particular skills, while others will be more general purpose, he noted. “We’ll have agents working with agents, agents working with us,” said Huang. “We’re going to supercharge the ever-loving daylights of our company. We’re going to come to work and a bunch of work we didn’t even realize needed to be done will be done.” Adoption needs to be demystified, he and Benioff agreed, with Huang noting that “it’s going to be a lot more like onboarding employees.” Benioff, for his part, underscored the importance of people being able to “actually understand” how they work and their purpose, and “need to get their hands in the soil.” ... Huang pointed out that the challenges we have in front of us are “many.” Some of these include fine-tuning and guardrailing, but scientists are making advancements in these areas every day. 


Navigating a Security Incident – Best Practices for Engaging Service Providers

Organizations experiencing a security incident must grapple with numerous competing issues simultaneously, usually under a very tight timeframe and the pressure of significant business disruption. Engaging qualified service providers is often critical to successfully resolving and minimizing the fall-out of the incident. These providers include forensic firms, public relations firms, restoration experts, and notification and call center vendors. Due to the nature of these services, they can have access to or even generate additional personal and sensitive information relevant to the incident. Protecting this information from third party or unauthorized disclosures during litigation, discovery, or otherwise, via the application of attorney-client privilege and the work product doctrine is essential. While there is no bright-line, uniform rule about how and under what circumstances these privileges attach to forensic reports and other information prepared by incident response providers, recent case law offers guidance as to how organizations can maximize the prospect that their assessments will remain shielded by the work product doctrine and/or the attorney-client privilege.


AWS claims customers are packing bags and heading back on-prem

You read that correctly – customers are finding that moving their IT back on-premises is so attractive compared with remaining on AWS that they are prepared to do this despite the significant effort involved. Hardly a resounding endorsement of the benefits of the cloud. AWS also says that customers may switch back to on-premises for a number of reasons, including "to reallocate their own internal finances, adjust their access to technology and increase the ownership of their resources, data and security." In fact, there have been a growing number of cases of companies moving some or even all their workloads back from the cloud – so-called cloud repatriation – and cost often seems to be a factor. ... Andrew Buss, IDC senior research director for EMEA, told The Register that while cloud repatriation is becoming more common, "we'd put the share of companies actively repatriating public cloud workloads in the single digit percentage sphere." Organizations are more likely to move to another public cloud provider if the incumbent is not meeting their needs, he said, and they have got more used to the cost economics of public cloud and can compare it to the long-term costs of running private IT infrastructure.



Quote for the day:

"Without initiative, leaders are simply workers in leadership positions." -- Bo Bennett