Daily Tech Digest - November 20, 2024

5 Steps To Cross the Operational Chasm in Incident Management

A siloed approach to incident management slows down decision-making and harms cross-team communication during incidents. Instead, organizations must cultivate a cross-functional culture where all team members are able to collaborate seamlessly. Cross-functional collaboration ensures that incident response plans are comprehensive and account for the insights and expertise contained within specific teams. This communication can be expedited with the support of AI tools to summarize information and draft messages, as well as the use of automation for sharing regular updates. ... An important step in developing a proactive incident management strategy is conducting post-incident reviews. When incidents are resolved, teams are often so busy that they are forced to move on without examining the contributing factors or identifying where processes can be improved. Conducting blameless reviews after significant incidents — and ideally every incident — is crucial for continuously and iteratively improving the systems in which incidents occur. This should cover both the technological and human aspects. Reviews must be thorough and uncover process flaws, training gaps or system vulnerabilities to improve incident management.


How to transform your architecture review board

A modernized approach to architecture review boards should start with establishing a partnership, building trust, and seeking collaboration between business leaders, devops teams, and compliance functions. Everyone in the organization uses technology, and many leverage platforms that extend the boundaries of architecture. Winbush suggests that devops teams must also extend their collaboration to include enterprise architects and review boards. “Don’t see ARBs as roadblocks, and treat them as a trusted team that provides much-needed insight to protect the team and the business,” he suggests. ... “Architectural review boards remain important in agile environments but must evolve beyond manual processes, such as interviews with practitioners and conventional tools that hinder engineering velocity,” says Moti Rafalin, CEO and co-founder of vFunction. “To improve development and support innovation, ARBs should embrace AI-driven tools to visualize, document, and analyze architecture in real-time, streamline routine tasks, and govern app development to reduce complexity.” ... “Architectural observability and governance represent a paradigm shift, enabling proactive management of architecture and allowing architects to set guardrails for development to prevent microservices sprawl and resulting complexity,” adds Rafalin.


Business Internet Security: Everything You Need to Consider

Each device on your business’s network, from computers to mobile phones, represents a potential point of entry for hackers. Treat connected devices as a door to your Wi-Fi networks, ensuring each one is secure enough to protect the entire structure. ... Software updates often include vital security patches that address identified vulnerabilities. Delaying updates on your security software is like ignoring a leaky roof; if left unattended, it will only get worse. Patch management and regularly updating all software on all your devices, including antivirus software and operating systems, will minimize the risk of exploitation. ... With cyber threats continuing to evolve and become more sophisticated, businesses can never be complacent about internet security and protecting their private network and data. Taking proactive steps toward securing your digital infrastructure and safeguarding sensitive data is a critical business decision. Prioritizing robust internet security measures safeguards your small business and ensures you’re well-equipped to face whatever kind of threat may come your way. While implementing these security measures may seem daunting, partnering with the right internet service provider like Optimum can give you a head start on your cybersecurity journey.


How Google Cloud’s Information Security Chief Is Preparing For AI Attackers

To build out his team, Venables added key veterans of the security industry, including Taylor Lehmann, who led security engineering teams for the Americas at Amazon Web Services, and MK Palmore, a former FBI agent and field security officer at Palo Alto Networks. “You need to have folks on board who understand that security narrative and can go toe-to-toe and explain it to CIOs and CISOs,” Palmore told Forbes. “Our team specializes in having those conversations, those workshops, those direct interactions with customers.” ... Generally, a “CISO is going to meet with a very small subset of their clients,” said Charlie Winckless, senior director analyst on Gartner's Digital Workplace Security team. “But the ability to generate guidance on using Google Cloud from the office of the CISO, and make that widely available, is incredibly important.” Google is trying to do just that. Last summer, Venables co-led the development of Google’s Secure AI Framework, or SAIF, a set of guidelines and best practices for security professionals to safeguard their AI initiatives. It’s based on six core principles, including making sure organizations have automated defense tools to keep pace with new and existing security threats, and putting policies in place that make it faster for companies to get user feedback on newly deployed AI tools.


11 ways to ensure IT-business alignment

A key way to facilitate alignment is to become agile enough to stay ahead of the curve, and be adaptive to change, Bragg advises. The CIO should also speak early when sensing a possible business course deviation. “A modern digital corporation requires IT to be a good partner in driving to the future rather than dwelling on a stable state.” IT leaders also need to be agile enough to drive and support change, communicate effectively, and be transparent about current projects and initiatives. ... To build strong ties, IT leaders must also listen to and learn from their business counterparts. “IT leaders can’t create a plan to enable business priorities in a vacuum,” Haddad explains. “It’s better to ask [business] leaders to share their plans, removing the guesswork around business needs and intentions.” ... When IT and the business fail to align, silos begin to form. “In these silos, there’s minimal interaction between parties, which leads to misaligned expectations and project failures because the IT actions do not match up with the company direction and roadmap,” Bronson says. “When companies employ a reactive rather than a proactive approach, the result is an IT function that’s more focused on putting out fires than being a value-add to the business.”


Edge Extending the Reach of the Data Center

Savings in communications can be achieved, and low-latency transactions can be realized if mini-data centers containing servers, storage and other edge equipment are located proximate to where users work. Industrial manufacturing is a prime example. In this case, a single server can run entire assembly lines and robotics without the need to tap into the central data center. Data that is relevant to the central data center can be sent later in a batch transaction at the end of a shift. ... Organizations are also choosing to co-locate IT in the cloud. This can reduce the cost of on-site hardware and software, although it does increase the cost of processing transactions and may introduce some latency into the transactions being processed. In both cases, there are overarching network management tools that enable IT to see, monitor and maintain network assets, data, and applications no matter where they are. ... Most IT departments are not at a point where they have all of their IT under a central management system, with the ability to see, tune, monitor and/or mitigate any event or activity anywhere. However, we are at a point where most CIOs recognize the necessity of funding and building a roadmap to this “uber management” network concept.


Orchestrator agents: Integration, human interaction, and enterprise knowledge at the core

“Effective orchestration agents support integrations with multiple enterprise systems, enabling them to pull data and execute actions across the organizations,” Zllbershot said. “This holistic approach provides the orchestration agent with a deep understanding of the business context, allowing for intelligent, contextual task management and prioritization.” For now, AI agents exist in islands within themselves. However, service providers like ServiceNow and Slack have begun integrating with other agents. ... Although AI agents are designed to go through workflows automatically, experts said it’s still important that the handoff between human employees and AI agents goes smoothly. The orchestration agent allows humans to see where the agents are in the workflow and lets the agent figure out its path to complete the task. “An ideal orchestration agent allows for visual definition of the process, has rich auditing capability, and can leverage its AI to make recommendations and guidance on the best actions. At the same time, it needs a data virtualization layer to ensure orchestration logic is separated from the complexity of back-end data stores,” said Pega’s Schuerman.


The Transformative Potential of Edge Computing

Edge computing devices like sensors continuously monitor the car’s performance, sending data back to the cloud for real-time analysis. This allows for early detection of potential issues, reducing the likelihood of breakdowns and enabling proactive maintenance. As a result, the vehicle is more reliable and efficient, with reduced downtime. Each sensor relies on a hyperconnected network that seamlessly integrates data-driven intelligence, real-time analytics, and insights through an edge-to-cloud continuum – an interconnected ecosystem spanning diverse cloud services and technologies across various environments. By processing data at the edge, within the vehicle, the amount of data transmitted to the cloud is reduced. ... No matter the industry, edge computing and cloud technology require a reliable, scalable, and global hyperconnected network – a digital fabric – to deliver operational and innovative benefits to businesses and create new value and experiences for customers. A digital fabric is pivotal in shaping the future of infrastructure. It ensures that businesses can leverage the full potential of edge and cloud technologies by supporting the anticipated surge in network traffic, meeting growing connectivity demands, and addressing complex security requirements.


The risks and rewards of penetration testing

It is impossible to predict how systems may react to penetration testing. As was the case with our customer, an unknown flaw or misconfiguration can lead to catastrophic results. Skilled penetration testers can usually anticipate such issues. However, even the best white hats are imperfect. It is better to discover these flaws during a controlled test than during a data breach. While performing tests, keep IT support staff available to respond to disruptions. Furthermore, do not be alarmed if your penetration testing provider asks you to sign an agreement that releases them from any liability due to testing. ... Black hats will generally follow the path of least resistance to break into systems. This means they will use well-known vulnerabilities they are confident they can exploit. Some hackers are still using ancient vulnerabilities, such as SQL injection, which date back to 1995. They use these because they work. It is uncommon for black hats to use unknown or “zero-day” exploits. These are reserved for high-value targets, such as government, military, or critical infrastructure. It is not feasible for white hats to test every possible way to exploit a system. Rather, they should focus on a broad set of commonly used exploits. Lastly, not every vulnerability is dangerous.


How Data Breaches Erode Trust and What Companies Can Do

A data breach can prompt customers to lose trust in an organisation, compelling them to take their business to a competitor whose reputation remains intact. A breach can discourage partners from continuing their relationship with a company since partners and vendors often share each other’s data, which may now be perceived as an elevated risk not worth taking. Reputational damage can devalue publicly traded companies and scupper a funding round for a private company. The financial cost of reputational damage may not be immediately apparent, but its consequences can reverberate for months and even years. ... In order to optimise cybersecurity efforts, organisations must consider the vulnerabilities particular to them and their industry. For example, financial institutions, often the target of more involved patterns like system intrusion, must invest in advanced perimeter security and threat detection. With internal actors factoring so heavily in healthcare, hospitals must prioritise cybersecurity training and stricter access controls. Major retailers that can’t afford extended downtime from a DoS attack must have contingency plans in place, including disaster recovery.



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - November 19, 2024

AI-driven software testing gains more champions but worries persist

"There is a clear need to align quality engineering metrics with business outcomes and showcase the strategic value of quality initiatives to drive meaningful change," the survey's team of authors, led by Jeff Spevacek of OpenText, stated. "On the technology front, the adoption of newer, smarter test automation tools has driven the average level of test automation to 44%. However, the most transformative trend this year is the rapid adoption of AI, particularly Gen AI, which is set to make a huge impact." ... While AI offers great promise as a quality and testing tool, the study said there are "significant challenges in validating protocols, AI models, and the complexity of validation of all integrations. Currently, many organizations are struggling to implement comprehensive test strategies that ensure optimized coverage of critical areas. However, looking ahead, there is a strong expectation that AI will play a pivotal role in addressing these challenges and enhancing the effectiveness of testing activities in this domain." The key takeaway point from the research is that software quality engineering is rapidly evolving: "Once defined as testing human-written software, it has now evolved with AI-generated code."


How IAM Missteps Cause Data Breaches

Here’s where it gets complicated. Implementing least privilege requires an application’s requirements specifications to be available on demand with details of the hierarchy and context behind every interconnected resource. Developers rarely know exactly which permissions each service needs. For example, to perform a read on an S3 bucket, we also need permissions to list contents of the S3 bucket. ... This is where we begin to be reactive and apply tools that scan for misconfigurations. Tools like AWS IAM Access Analyzer or Google Cloud’s IAM recommender are valuable for identifying risky permissions or potential overreach. However, if these tools become the primary line of defense, they can create a false sense of security. Most permission-checking tools are designed to analyze permissions at a point in time, often flagging issues after permissions are already in place. This reactive approach means that misconfigurations are only addressed after they occur, leaving systems vulnerable until the next scan. ... The solution lies in rethinking the way in which we wire up these relationships in the first place. Let’s take a look at two very simple pieces of code that both expose an API with a route to return a pre-signed URL from a cloud storage bucket.
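
For illustration, here is a minimal Python sketch (not the article's actual code) of such a route, using boto3 and Flask; the bucket name and route path are hypothetical. The only permission the service's role really needs is s3:GetObject on that bucket, rather than broad s3:* access across the account.

import boto3
from flask import Flask, jsonify

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "example-reports-bucket"  # hypothetical bucket name

@app.route("/reports/<key>/download-url")
def download_url(key: str):
    # The pre-signed URL only works if the identity that signed it is allowed
    # to perform s3:GetObject on this key, so scoping the role's policy to
    # this bucket is the least-privilege boundary that matters here.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # a short-lived URL limits the blast radius of a leak
    )
    return jsonify({"url": url})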


Explainable AI: A question of evolution?

Inexplicable black boxes lead back to the bewitchment of the Sorting Hat; with real-life tools we need to know how their decisions are made. As for the human-in-the-loop on whom we are pinning so much, if they are to step in and override AI decisions the humans better be on more than just speaking terms with their tools. Explanation is their job description. And it’s where the tools are used by the state to make decisions about us, our lives, liberty and livelihoods, that the need for explanation is greatest. Take a policing example. Whether or not drivers understand them we’ve been rubbing along with speed cameras for decades. What will AI-enabled road safety tools look and sound and think like? If they’re on speaking terms with our in-car telematics they’ll know what we’ve been up to behind the wheel for the last year, not just the last mile. Will they be on speaking terms with juries, courts and public inquiries, reconstructing events that took place before they were even invented, together with all the attendant sounds, smells and sensation rather than just pics and stats? Much depends on the type of AI involved but even Narrow AI has given the police new reach like remote biometrics.


Rethinking Documentation for Agile Teams

Documentation doesn’t need to be a separate task or deliverable to complete. During every meeting or asynchronous interaction, you can organically create documentation by using a virtual whiteboard to take notes, create visuals, and complete activities. ... Look for tools that can help you build and maintain your technical documentation with less effort. Modern visual collaboration solutions like Lucid offer advanced features to streamline documentation. These solutions can automatically generate various diagrams such as flowcharts, ERDs, org charts, and UML diagrams directly from your data. Some even incorporate AI assistance to help build and optimize diagrams. By using automation, teams can significantly reduce errors commonly associated with the manual creation of documentation. Another advantage of these platforms is the ability to link your data sources directly to your documents. This integration ensures your documentation stays up to date automatically, without requiring additional effort. What's more, advanced visual collaboration solutions integrate with project management tools like Jira and Azure DevOps. This integration allows teams to seamlessly share visuals between their chosen platforms, saving time and effort in keeping information synchronized across their environment.


Succeeding with observability in the cloud

The complexity of modern cloud environments amplifies the need for robust observability. Cloud applications today are built upon microservices, RESTful APIs, and containers, often spanning multicloud and hybrid architectures. This interconnectivity and distribution introduce layers of complexity that traditional monitoring paradigms struggle to capture. Observability addresses this by utilizing advanced analytics, artificial intelligence, and machine learning to analyze real-time logs, traces, and metrics, effectively transforming operational data into actionable insights. One of observability’s core strengths is its capacity to provide a continuous understanding of system operations, enabling proactive management instead of waiting for failures to manifest. Observability empowers teams to identify potential issues before they escalate, shifting from a reactive troubleshooting stance to a proactive optimization mindset. This capability is crucial in environments where systems must scale instantly to accommodate fluctuating demands while maintaining uninterrupted service.
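
For a sense of what that operational data looks like at the source, the sketch below uses the OpenTelemetry Python API to emit a trace span and a metric from a hypothetical checkout service; the service, span, and attribute names are illustrative, and a separately configured SDK exporter would ship the data to the analysis backend.

from opentelemetry import metrics, trace

# Without an SDK configured, these calls are no-ops; wiring an exporter
# (for example, OTLP to a collector) is what turns them into real telemetry.
tracer = trace.get_tracer("checkout-service")   # hypothetical service name
meter = metrics.get_meter("checkout-service")
orders_counter = meter.create_counter("orders_processed")

def process_order(order_id: str, amount: float) -> None:
    # One span per unit of work; attributes give the backend the context it
    # needs to correlate this trace with related logs and metrics.
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("order.amount", amount)
        # ... business logic would run here ...
        orders_counter.add(1, {"region": "us-east-1"})

process_order("ord-1234", 42.50)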


How to Reduce VDI Costs

The onset of widespread remote work made the strategy much more prevalent, given that many organizations already had VDI infrastructure and experience. Due to its architectural design, infrastructure requirements scale more or less linearly with usage. But that means most organizations are often upside-down in their VDI investment — given that the costs are significant — and it seems that both practitioners and users have disdain for the experience. ... Maintaining VDI can be costly due to the need for patch management, hardware upgrades and support for end-user issues. An enterprise browser eliminates maintenance costs associated with traditional VDI systems because it requires no additional hardware. It also lowers administrative costs by centralizing controls within the browser, which reduces the need for multiple security tools and streamlines policy management. ... VDI solutions and their back-end systems can have substantial licensing fees, including the VDI platform and any extra licenses for the operating systems and apps used in VDI sessions. An enterprise browser can reduce the need for VDI by 80% to 90%, saving money on licensing costs. ... Ensuring secure and compliant endpoint interactions within a VDI session often requires additional endpoint controls and management solutions. 


Quantum computing: The future just got faster

Quantum computing holds promise for breakthroughs in many different industries. For example, scientists could use this technology to improve drug research by modeling complex molecules and interactions that were previously computationally prohibitive. Complex optimization problems, like those encountered in logistics and supply chain management, could see solutions that drastically reduce costs and improve efficiency. Quantum computers could revolutionize cryptography by rapidly solving mathematical problems that underpin current encryption methods, posing both opportunities and significant security challenges. Sure, logistics and molecular simulations might sound far off for us regular folks, but there are applications that are right around the corner. For example, quantum computing could allow marketers to quickly analyze and process vast amounts of consumer data to identify trends, optimize ad placements, and tailor campaigns in real-time. While traditional data analysis might take hours or days to sift through customer preferences, a quantum computer could potentially complete this analysis in minutes, providing marketers with insights to adjust strategies almost instantaneously.


Why AI alone can’t protect you from sophisticated email threats

The battle between AI-based social engineering and AI-powered security measures is an ongoing one. Sophisticated attackers may develop techniques to evade AI detection, such as using ever more subtle and contextually accurate language, but security tools will then adapt to this, putting the pressure back on the attackers. So while AI-based behavioural analysis is a powerful tool in the fight against sophisticated social engineering attacks, it is most effective when used within a multi-layered defence strategy that includes security awareness training and other security measures. ... Alternative strategies for CISOs to consider include integrating AI and machine learning into the email security platform. AI/ML can analyse vast amounts of data in real time to identify anomalies and malicious patterns and respond accordingly. Behavioural analytics help detect unusual activities and patterns that indicate potential threats. ... Ensuring the security of email communications, especially with the involvement of third-party vendors, requires a comprehensive approach that is based both on security due diligence of the partner and effective security tools. Before engaging with any third party, an organisation should conduct a background check and security assessment.


Shortsighted CEOs leave CIOs with increasing tech debt

There’s a delicate balance between short- and long-term IT goals. A lot of the current focus with AI projects is to cut costs and drive efficiencies, but organizations also need to think about longer-term innovation, says Taylor Brown, co-founder and COO of Fivetran, vendor of a data management platform. “Every business, at some scale, is based on the decision of, ‘Do I continue to invest to make my product better and update it, or do I just keep driving the revenue that I have out of the product that I have?’” he says. “A lot of companies face this, and if you want to stay relevant, you want to compete and invest in innovation.” There are some companies that can probably survive by not thinking about long-term innovation, but they are few and far between, Brown says. “If you’re a technology company, then absolutely, you have to constantly be thinking about innovation, unless you have some crazy lock-in,” he adds. “In order to win new customers, you have to keep innovating.” Some IT leaders, however, aren’t convinced about the IBM report’s focus on IT shortcuts vs. innovation. IT spending is driven more by a desire to enable business goals, such as growth, and managing risks, including cyberattacks, says Yvette Kanouff, partner at JC2 Ventures, a tech-focused venture capital firm.


Musk’s anticipated cost-cutting hacks could weaken American cybersecurity

Although it’s too soon to predict what cybersecurity regulations DOGE might affect, experts say Musk might, at minimum, seek to strip regulatory power from agencies that align with some of his business interests, weakening their cybersecurity requirements or recommended practices in the process. Musk’s effort dovetails with what experts have already said: there is a high likelihood that the Trump administration will move to eliminate cybersecurity regulations. A landmark Supreme Court decision this summer that casts doubt on the future of all expert agency regulations reinforces this deregulatory direction. ... Even if Musk and the DOGE effort were to succeed in hacking back a significant number of regulations, experts say it won’t come easy. “One doesn’t know how enduring their relationship will be, nor how much of it is just going to be talk, nor how much opposition there might be in the state generally,” Tony Yates, former Professor of Economics at Birmingham University in the UK and a former senior advisor to the Bank of England, tells CSO. “The US has lots of checks and balances, many of which aren’t working as well as they used to,” he says. “But they’re still not entirely absent. So, it’s really hard to predict.”



Quote for the day:

“Success is not so much what we have, as it is what we are.” -- Jim Rohn

Daily Tech Digest - November 18, 2024

3 leadership lessons we can learn from ethical hackers

By nature, hackers possess a knack for looking beyond the obvious to find what’s hidden. They leverage their ingenuity and resourcefulness to address threats and anticipate future risks. And most importantly, they are unafraid to break things to make them better. Likewise, when leading an organization, you are often faced with problems that, from the outside, look insurmountable. You must handle challenges that threaten your internal culture or your product roadmap, and it’s up to you to decide the right path toward progress. Now is the most critical time to find those hidden opportunities to strengthen your organization and remain fearless in your decisions toward a stronger path. ... Leaders must remove ego and cultivate open communication within their organizations. At HackerOne, we build accountability through company-wide weekly Ask Me Anything (AMA) sessions to share organizational knowledge, ask tough questions about the business, and encourage employees to share their perspectives openly without fear of retaliation. ... Most hackers are self-taught enthusiasts. Young and without formal cybersecurity training, they are driven by a passion for their craft. Internal drive propels them to continue their search for what others miss. If there is a way to see the gaps, they will find them.


So, you don’t have a chief information security officer? 9 signs your company needs one

The cost to hire and retain a CISO is a major stumbling block for some organizations. Even promoting someone from within to a newly created CISO post can be expensive: total compensation for a full-time CISO in the US now averages $565,000 per year, not including other costs that often come with filling the position. ... Running cybersecurity on top of their own duties can be a tricky balancing act for some CIOs, says Cameron Smith, advisory lead for cybersecurity and data privacy at Info-Tech Research Group in London, Ontario. “A CIO has a lot of objectives or goals that don’t relate to security, and those sometimes conflict with one another. Security oftentimes can be at odds with certain productivity goals. But both of those (roles) should be aimed at advancing the success of the organization,” Smith says. ... A virtual CISO is one option for companies seeking to bolster cybersecurity without a full-time CISO. Black says this approach could make sense for companies trying to lighten the load of their overburdened CIO or CTO, as well as firms lacking the size, budget, or complexity to justify a permanent CISO. ... Not having a CISO in place could cost your company business with existing clients or prospective customers who operate in regulated sectors, expect their partners or suppliers to have a rigorous security framework, or require it for certain high-level projects.
Most importantly, AI agents can make advanced capabilities, including real-time data analysis, predictive modeling, and autonomous decision-making, available to a much wider group of people in any organization. That, in turn, gives companies a way to harness the full potential of their data. Simply put, AI agents are rapidly becoming essential tools for business managers and data analysts in industrial businesses, including those in chemical production, manufacturing, energy sectors, and more. ... In the chemical industry, AI agents can monitor and control chemical processes in real time, minimizing risks associated with equipment failures, leaks, or hazardous reactions. By analyzing data from sensors and operational equipment, AI agents can predict potential failures and recommend preventive maintenance actions. This reduces downtime, improves safety, and enhances overall production efficiency. ... AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries. For business managers and data analysts, the key takeaway is clear: AI agents are not just a future possibility—they are a present necessity, capable of driving efficiency, innovation, and growth in today’s competitive industrial environment.
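
As a simplified illustration of that sensor-analysis pattern (not drawn from the article), the Python sketch below flags a reading that drifts several standard deviations away from its recent history, the kind of check an agent might run continuously before recommending preventive maintenance; the window size, threshold, and readings are hypothetical.

from collections import deque
from statistics import mean, stdev

def make_monitor(window: int = 60, threshold: float = 3.0):
    history = deque(maxlen=window)

    def check(reading: float) -> bool:
        """Return True when a reading drifts far from its recent history."""
        anomalous = False
        if len(history) >= 10:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) > threshold * sigma:
                anomalous = True
        history.append(reading)
        return anomalous

    return check

# Hypothetical vibration readings from a pump; the final value jumps well
# outside the recent band and would be flagged for preventive maintenance.
check_vibration = make_monitor()
for value in [0.42, 0.40, 0.43, 0.41, 0.44, 0.40, 0.42, 0.43, 0.41, 0.42, 1.90]:
    if check_vibration(value):
        print(f"Possible equipment issue: vibration reading {value}")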


Want to Modernize Your Apps? Start By Modernizing Your Software Delivery Processes

A healthier approach to app modernization is to focus on modernizing your processes. Despite momentous changes in application deployment technology over the past decade or two, the development processes that best drive software innovation and efficiency — like the interrelated concepts and practices of agile, continuous integration/continuous delivery (CI/CD) and DevOps — have remained more or less the same. This is why modernizing your application delivery processes to take advantage of the most innovative techniques should be every business’s real focus. When your processes are modern, your ability to leverage modern technology and update apps quickly to take advantage of new technology follows naturally. ... In addition to modifying processes themselves, app modernization should also involve the goal of changing the way organizations think about processes in general. By this, I mean pushing developers, IT admins and managers to turn to automation by default when implementing processes. This might seem unnecessary because plenty of IT professionals today talk about the importance of automation. Yet, when it comes to implementing processes, they tend to lean toward manual approaches because they are faster and simpler to implement initially. 


The ‘Great IT Rebrand’: Restructuring IT for business success

To champion his reimagined vision for IT, BBNI’s Nester stresses the art of effective communication and the importance of a solid marketing campaign. In partnership with corporate communications, Nester established the Techniculture brand and lineup of related events specifically designed to align technology, business, and culture in support of enterprise goals. Quarterly Techniculture town hall meetings anchored by both business and technology leaders keep the several hundred Technology Solutions team members abreast of business priorities and familiar with the firm’s money-making mechanics, including a window into how technology helps achieve specific revenue goals, Nester explains. “It’s a can’t-miss event and our largest team engagement — even more so than the CEO videos,” he contends. The next pillar of the Techniculture foundation is Techniculture Live, an annual leadership summit. One third of the Technology Solutions Group, about 250 teammates by Nester’s estimates, participate in the event, which is not a deep dive into the latest technologies, but rather spotlights business performance and technology initiatives that have been most impactful to achieving corporate goals.


The Role of DSPM in Data Compliance: Going Beyond CSPM for Regulatory Success

DSPM is a data-focused approach to securing the cloud environment. By addressing cloud security from the angle of discovering sensitive data, DSPM is centered on protecting an organization’s valuable data. This approach helps organizations discover, classify, and protect data across all platforms, including IaaS, PaaS, and SaaS applications. Where CSPM is focused on finding vulnerabilities and risks for teams to remediate across the cloud environment, DSPM “gives security teams visibility into where cloud data is stored” and detects risks to that data. Security misconfigurations and vulnerabilities that may result in the exposure of data can be flagged by DSPM solutions for remediation, helping to protect an organization’s most sensitive resources. Beyond simply discovering sensitive data, DSPM solutions also address many questions of data access and governance. They provide insight into not only where sensitive data is located, but which users have access to it, how it is used, and the security posture of the data store. ... Every organization undoubtedly has valuable and sensitive enterprise, customer, and employee data that must be protected against a wide range of threats. Organizations can reap a great deal of benefits from DSPM in protecting data that is not stored on-premises.


The hidden challenges of AI development no one talks about

Currently, AI developers spend too much of their time (up to 75%) with the "tooling" they need to build applications. Unless they have the technology to spend less time tooling, these companies won't be able to scale their AI applications. To add to technical challenges, nearly every AI startup is reliant on NVIDIA GPU compute to train and run their AI models, especially at scale. Developing a good relationship with hardware suppliers or cloud providers like Paperspace can help startups, but the cost of purchasing or renting these machines quickly becomes the largest expense any smaller company will run into. Additionally, there is currently a battle to hire and keep AI talent. We've seen recently how companies like OpenAI are trying to poach talent from other heavy hitters like Google, which makes the process for attracting talent at smaller companies much more difficult. ... Training a Deep Learning model is almost always extremely expensive. This is a result of the combined function of resource costs for the hardware itself, data collection, and employees. In order to ameliorate this issue facing the industry's newest players, we aim to achieve several goals for our users: creating an easy-to-use environment, introducing inherent replicability across our products, and providing access at the lowest possible cost.


Transforming code scanning and threat detection with GenAI

The complexity of software components and stacks can sometimes be mind-bending, so it is imperative to connect all these dots in as seamless and hands-free a way as possible. ... If you’re a developer with a mountain of feature requests and bug fixes on your plate and then receive a tsunami of security tickets that nobody’s incentivized to care about… guess which ones are getting pushed to the bottom of the pile? Generative AI-based agentic workflows are sparking the flames of cybersecurity and engineering teams alike to see the light at the end of the tunnel and consider the possibility that SSDLC is on the near-term horizon. And we’re seeing some promising changes already today in the market. Imagine having an intelligent assistant that can automatically track issues, figure out which ones matter most, suggest fixes, and then test and validate those fixes, all at the speed of computing! We still need our developers to oversee things and make the final calls, but the software agent swallows most of the burden of running an efficient program. ... AI’s evolution in code scanning fundamentally reshapes our approach to security. Optimized generative AI LLMs can assess millions of lines of code in seconds and pay attention to even the most subtle and nuanced set of patterns, finding the needle in a haystack that humans would almost always miss.


5 Tips for Optimizing Multi-Region Cloud Configurations

Multi-region cloud configurations get very complicated very quickly, especially for active-active environments where you’re replicating data constantly. Containerized microservice-based applications allow for faster startup times, but they also drive up the number of resources you’ll need. Even active-passive environments for cold backup-and-restore use cases are resource-heavy. You’ll still need a lot of instances, AMI IDs, snapshots, and more to achieve a reasonable disaster recovery turnaround time. ... The CAP theorem forces you to choose only two of the three options: consistency, availability, and partition tolerance. Since we’re configuring for multi-region, partition tolerance is non-negotiable, which leaves a battle between availability and consistency. Yes, you can hold onto both, but you’ll drive high costs and an outsized management burden. If you’re running active-passive environments, opt for consistency over availability. This allows you to use Platform-as-a-Service (PaaS) solutions to replicate your database to your passive region. ... For active-passive environments, routing isn’t a serious concern. You’ll use default priority global routing to support failover handling, end of story. But for active-active environments, you’ll want different routing policies depending on the situation in that region.
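
To make the routing point concrete, here is a minimal boto3 sketch of default-priority failover routing for an active-passive pair; the domain, hosted zone ID, IP addresses, and health check ID are hypothetical. Route 53 answers with the primary region while its health check passes and fails over to the passive region when it does not; active-active setups would use latency-based or weighted policies instead.

import boto3

route53 = boto3.client("route53")

def upsert_failover_record(role: str, ip_address: str, health_check_id: str = None) -> None:
    # One record per region; DNS serves the PRIMARY answer while its health
    # check passes and switches to SECONDARY when it does not.
    record = {
        "Name": "app.example.com",                # hypothetical domain
        "Type": "A",
        "SetIdentifier": f"{role.lower()}-region",
        "Failover": role,                         # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip_address}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",           # hypothetical hosted zone ID
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )

upsert_failover_record("PRIMARY", "203.0.113.10", health_check_id="hc-primary-example")
upsert_failover_record("SECONDARY", "198.51.100.20")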


Why API-First Matters in an AI-Driven World

Implementing an API-first approach at scale is a nontrivial exercise. The fundamental reason for this is that API-first involves “people.” It’s central to the methodology that APIs are embraced as socio-technical assets, and therefore, it requires a change in how “people,” both technical and non-technical, work and collaborate. There are some common objections to adopting API-First within organizations that raise their head, as well as some newer framings, given the eagerness of many to participate in the AI-hyped landscape. ... Don’t try to design for all eventualities. Instead, follow good extensibility patterns that enable future evolution and design “just enough” of the API based on current needs. There are added benefits when you combine this tactic with API specifications, as you can get fast feedback loops on that design before any investments are made in writing code or creating test suites. ... An API-First approach is powerful precisely because it starts with a use-case-oriented mindset, thinking about the problem being solved and how best to present data that aligns with that solution. By exposing data thoughtfully through APIs, companies can encapsulate domain-specific knowledge, apply business logic, and ensure that data is served securely, self-service, and tailored to business needs. 
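
As a small sketch of that "just enough" tactic, assuming a Python service built with FastAPI (the resource and field names are hypothetical): the endpoint exposes only what current consumers need, the response model leaves room for additive, non-breaking fields later, and the OpenAPI specification generated from this code gives reviewers a fast feedback loop before any real logic is written.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Orders API", version="1.0.0")

class OrderSummary(BaseModel):
    # Only the fields current consumers need; optional fields can be added
    # later without breaking existing clients.
    order_id: str
    status: str

@app.get("/v1/orders/{order_id}", response_model=OrderSummary)
def get_order(order_id: str) -> OrderSummary:
    # Stubbed lookup; the generated OpenAPI spec can be reviewed and
    # exercised before any back-end integration exists.
    return OrderSummary(order_id=order_id, status="PENDING")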



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - November 17, 2024

Why Are User Acceptance Tests Such a Hassle?

In the reality of many projects, UAT often becomes irreplaceable and needs to be extensive, covering a larger part of the testing pyramid than recommended ... Automated end-to-end tests often fail to cover third-party integrations due to limited access and support, requiring UAT. For instance, if a system integrates with an analytics tool, any changes to the system may require stakeholders to verify the results on the tool as well. ... In industries such as finance, healthcare, or aviation, where regulatory compliance is critical, UATs must ensure that the software meets all legal and regulatory requirements. ... In projects involving intricate business workflows, many UATs may be necessary to cover all possible scenarios and edge cases. ... This process can quickly become complex when dealing with numerous test cases, engineering teams, and stakeholder groups. This complexity often results in significant manual effort in both testing and collaboration. Even though UATs are cumbersome, most companies do not automate them because they focus on validating business requirements and user experiences, which require subjective assessment. However, automating UAT can save testing hours and the effort to coordinate testing sessions.


The full-stack architect: A new lead role for crystalizing EA value

First, the full-stack architect could ensure the function’s other architects are indeed aligned, not only among themselves, but with stakeholders from both the business and engineering. That last bit shouldn’t be overlooked, Ma says. While much attention gets paid to the notion that architects should be able to work fluently with the business, they should, in fact, work just as fluently with Engineering, meaning that whoever steps into the role should wield deep technical expertise, an attribute vital to earning the respect of engineers, and one that more traditional enterprise architects lack. For both types of stakeholders, then, the full-stack architect could serve as a single point of contact. Less “telephone,” as it were. And it could clarify the value proposition of EA as a singular function — and with respect to the business it serves. Finally, the role would probably make a few other architects unnecessary, or at least allow them to concentrate more fully on their respective principal responsibilities. No longer would they have to coordinate their peers. Ma’s inspiration for the role finds its origin in the full-stack engineer, as Ma sees EA today evolving similarly to how software engineering evolved about 15 years ago. 


Groundbreaking 8-Photon Qubit Chip Accelerates Quantum Computing

Quantum circuits based on photonic qubits are among the most promising technologies currently under active research for building a universal quantum computer. Several photonic qubits can be integrated into a tiny silicon chip as small as a fingernail, and a large number of these tiny chips can be connected via optical fibers to form a vast network of qubits, enabling the realization of a universal quantum computer. Photonic quantum computers offer advantages in terms of scalability through optical networking, room-temperature operation, and the low energy consumption. ... The research team measured the Hong-Ou-Mandel effect, a fascinating quantum phenomenon in which two different photons entering from different directions can interfere and travel together along the same path. In another notable quantum experiment, they demonstrated a 4-qubit entangled state on a 4-qubit integrated circuit (5mm x 5mm). Recently, they have expanded their research to 8 photon experiments using an 8-qubit integrated circuit (10mm x 5mm). The researchers plan to fabricate 16-qubit chips within this year, followed by scaling up to 32-qubits as part of their ongoing research toward quantum computation.


Mastering The Role Of CISO: What The Job Really Entails

A big part of a CISO’s job is working effectively with other senior executives. Success isn’t just about technical prowess; it’s about building relationships and navigating the politics of the C-suite. Whether you’re collaborating with the CEO, CFO, CIO, or CLO, you must be able to work within a broader leadership context to align security goals with business objectives. One of the most important lessons I’ve learned is to involve key stakeholders early and often. Don’t wait until you have a finalized proposal to present; get input and feedback from the relevant parties—especially the CTO, CIO, CLO, and CFO—at every stage. This collaborative approach helps you refine your security plans, ensures they are aligned with the company’s broader strategy, and reduces the likelihood of pushback when it’s time to present your final recommendations. ... While technical expertise forms the foundation of the CISO role, much of the work comes down to creative problem-solving. Being a CISO is like being a puzzle solver—you need to look at your organization’s specific challenges, risks, and goals, and figure out how to put the pieces together in a way that addresses both current and future needs.


Why Future-proofing Cybersecurity Regulatory Frameworks Is Essential

As regulations evolve, ensuring the security and privacy of the personal information used in AI training looks set to become increasingly difficult, which could lead to severe consequences for both individuals and organizations. The same survey went on to reveal that 30% of developers believe that there is a general lack of understanding among regulators who are not equipped with the right set of skills to comprehend the technology they're tasked with regulating. With skills and knowledge in question, alongside rapidly advancing AI and cybersecurity threats, what exactly should regulators keep in mind when creating regulatory frameworks that are both adaptable and effective? It's my view that, firstly, regulators should know all the options on the table when it comes to possible privacy-enhancing technologies (PETs). ... Incorporating continuous learning within the organization is also crucial, as well as allowing employees to participate in industry events and conferences to stay up to speed on the latest developments and to meet with experts. Where possible, we should be creating collaborations with the industry — for example, inviting representatives of tech companies to give internal seminars or demonstrations.


AI could alter data science as we know it - here's why

Davenport and Barkin note that generative AI will take citizen development to a whole new level. "First is through conversational user interfaces," they write. "Virtually every vendor of software today has announced or is soon to introduce a generative AI interface." "Now or in the very near future, someone interested in programming or accessing/analyzing data need only make a request to an AI system in regular language for a program containing a set of particular functions, an automation workflow with key steps and decisions, or a machine-learning analysis involving particular variables or features." ... Looking beyond these early starts, with the growth of AI, RPA, and other tools, "some citizen developers are likely to no longer be necessary, and every citizen will need to change how they do their work," Davenport and Barkin speculate. ... "The rise of AI-driven tools capable of handling data analysis, modeling, and insight generation could force a shift in how we view the role and future of data science itself," said Ligot. "Tasks like data preparation, cleansing, and even basic qualitative analysis -- activities that consume much of a data scientist's time -- are now easily automated by AI systems."


Scaling Small Language Models (SLMs) For Edge Devices: A New Frontier In AI

Small language models (SLMs) are lightweight neural network models designed to perform specialized natural language processing tasks with fewer computational resources and parameters, typically ranging from a few million to several billion parameters. Unlike large language models (LLMs), which aim for general-purpose capabilities across a wide range of applications, SLMs are optimized for efficiency, making them ideal for deployment in resource-constrained environments such as mobile devices, wearables and edge computing systems. ... One way to make SLMs work on edge devices is through model compression. This reduces the model’s size without losing much performance. Quantization is a key technique that simplifies the model’s data, like turning 32-bit numbers into 8-bit, making the model faster and lighter while maintaining accuracy. Think of a smart speaker—quantization helps it respond quickly to voice commands without needing cloud processing. ... The growing prominence of SLMs is reshaping the AI world, placing a greater emphasis on efficiency, privacy and real-time functionality. For everyone from AI experts to product developers and everyday users, this shift opens up exciting possibilities where powerful AI can operate directly on the devices we use daily—no cloud required.
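
As a minimal sketch of that idea, assuming PyTorch is available (the tiny model below is a stand-in, not a real SLM): post-training dynamic quantization converts the linear layers' 32-bit float weights to 8-bit integers, cutting their memory footprint roughly fourfold for the same architecture.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 128),
)
model.eval()

# Replace the Linear layers with int8 dynamically quantized equivalents.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"fp32 parameter size: {fp32_bytes} bytes")  # int8 weights are about 4x smaller

# The quantized model keeps the same interface as the original.
x = torch.randn(1, 512)
print(quantized(x).shape)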


How To Ensure Your Cloud Project Doesn’t Fail

To get the best out of your team requires striking a delicate balance between discipline and freedom. A bunch of “computer nerds” might not produce much value if left completely to their own devices. But they also won’t be innovative if not given freedom to explore and mess around with ideas. When building your Cloud team, look beyond technical skills. Seek individuals who are curious, adaptable, and collaborative. These traits are crucial for navigating the ever-changing landscape of Cloud technology and fostering an environment of continuous innovation. ... Culture plays a pivotal role in successful Cloud adoption. To develop the right culture for Cloud innovation, start by clearly defining and communicating your company's values and goals. You should also work to foster an environment that encourages calculated risk-taking and learning from failures as well as promotes collaboration and knowledge sharing across teams. Finally, make sure to incentivise your culture by recognising and rewarding innovation, not just successful outcomes. ... Having a well-defined culture is just the first step. To truly harness the power of your talent, you need to embed your definition of talent into every aspect of your company's processes.


2025 Tech Predictions – A Year of Realisation, Regulations and Resilience

A number of businesses are expected to move workloads from the public cloud back to on-premises data centres to manage costs and improve efficiencies. This is the essence of data freedom – the ability to move and store data wherever you need it, with no vendor lock-in. Organisations that previously shifted to the public cloud now realise that a hybrid approach is more advantageous for achieving cloud economics. While the public cloud has its benefits, local infrastructure can offer superior control and performance in certain instances, such as for resource-intensive applications that need to remain closer to the edge. ... As these threats become more commonplace, businesses are expected to adopt more proactive cybersecurity strategies and advanced identity validation methods, such as voice authentication. The uptake of AI-powered solutions to prevent and prepare for cyberattacks is also expected to increase. ... Unsurprisingly, the continuous proliferation of data into 2025 will see the introduction of new AI-focused roles. Chief AI Officers (CAIOs) are responsible for overseeing the ethical, responsible and effective use of AI across organisations and bridging the gap between technical teams and key stakeholders.


In an Age of AI, Cloud Security Skills Remain in Demand

While identifying and recruiting the right tech and security talent is crucial, cybersecurity experts note that organizations must make a conscientious choice to invest in cloud security, especially as more data is uploaded and stored within SaaS apps and third-party, infrastructure-as-a-service (IaaS) providers such as Amazon Web Services and Microsoft Azure. “To close the cloud security skills gap, organizations should prioritize cloud-specific security training and certifications for their IT staff,” Stephen Kowski, field CTO at security firm SlashNext, told Dice. “Implementing cloud-native security tools that provide comprehensive visibility and protection across multi-cloud environments can help mitigate risks. Engaging managed security service providers with cloud expertise can also supplement in-house capabilities and provide valuable guidance.” Jason Soroko, a senior Fellow at Sectigo, expressed similar sentiments when it comes to organizations assisting in building out their cloud security capabilities and developing the talent needed to fulfill this mission. “To close the cloud security skills gap, organizations should offer targeted training programs, support certification efforts and consider hiring experts to mentor existing teams,” Soroko told Dice. 



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - November 16, 2024

New framework aims to keep AI safe in US critical infrastructure

According to a release issued by DHS, "this first-of-its-kind resource was developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators — as well as the civil society and public sector entities that protect and advocate for consumers." ... Naveen Chhabra, principal analyst with Forrester, said, "while average enterprises may not directly benefit from it, this is going to be an important framework for those that are investing in AI models." ... Asked why he thinks DHS felt the need to create the framework, Chhabra said that developments in the AI industry are "unique, in the sense that the industry is going back to the government and asking for intervention in ensuring that we, collectively, develop safe and secure AI." ... David Brauchler, technical director at cybersecurity vendor NCC, sees the guidelines as a beginning, pointing out that frameworks like this are just a starting point for organizations, providing them with big picture guidelines, not roadmaps. He described the DHS initiative in an email as "representing another step in the ongoing evolution of AI governance and security that we’ve seen develop over the past two years. It doesn’t revolutionize the discussion, but it aligns many of the concerns associated with AI/ML systems with their relevant stakeholders."


Building an Augmented-Connected Workforce

An augmented workforce can work faster and more efficiently thanks to seamless access to real-time diagnostics and analytics, as well as live remote assistance, observes Peter Zornio, CTO at Emerson, an automation technology vendor serving critical industries. "An augmented-connected workforce institutionalizes best practices across the enterprise and sustains the value it delivers to operational and business performance regardless of workforce size or travel restrictions," he says in an email interview. An augmented-connected workforce can also help fill some of the gaps many manufacturers currently face, Gaus says. "There are many jobs unfilled because workers aren't attracted to manufacturing, or lack the technological skills needed to fill them," he explains. ... For enterprises that have already invested in advanced digital technologies, the path leading to an augmented-connected workforce is already underway. The next step is ensuring a holistic approach when looking at tangible ways to achieve such a workforce. "Look at the tools your organization is already using -- AI, AR, VR, and so on -- and think about how you can scale them or connect them with your human talent," Gaus says. Yet advanced technologies alone aren't enough to guarantee long-term success.


DORA and why resilience (once again) matters to the board

DORA, though, might be overlooked because of its finance-specific focus. The act has not attracted the attention of NIS2, which sets out cybersecurity standards for 15 critical sectors in the EU economy. And NIS2 came into force in October; CIOs and hard-pressed compliance teams could be forgiven for not focusing on another piece of legislation that is due in the New Year. But ignoring DORA altogether would be short-sighted. Firstly, as Rodrigo Marcos, chair of the EU Council at cybersecurity body CREST points out, DORA is a law, not a framework or best practice guidelines. Failing to comply could lead to penalties. But DORA also covers third-party risks, which includes digital supply chains. The legislation extends to any third party supplying a financial services firm, if the service they supply is critical. This will include IT and communications suppliers, including cloud and software vendors. ... And CIOs are also putting more emphasis on resilience and recovery. In some ways, we have come full circle. Disaster recovery and business continuity were once mainstays of IT operations planning but moved down the list with the move to the cloud. Cyber attacks, and especially ransomware, have pushed both resilience and recovery right back up the agenda.


Data Is Not the New Oil: It’s More Like Uranium

Comparing data to uranium is an apt analogy. Uranium is radioactive, and it must be handled carefully to avoid radiation exposure, the effects of which carry serious health and safety consequences. Problems with deploying uranium, in reactors for instance, can lead to radioactive fallout that is expensive to contain and has long-term health consequences for those affected. The theft of uranium poses significant risks with global repercussions. Data exhibits similar characteristics. It must be stored securely, and those who experience data theft are forced to deal with long-term consequences such as identity theft and financial harm. An organization hit by a cyberattack must deal with regulatory oversight and fines. In some cases, losing sensitive data can trigger significant global consequences. ... Maintaining a data chain of custody is paramount. Some companies allow all employees access to all records, which expands the attack surface, and a compromised employee could lead to a data breach. Even a single compromised employee computer can lead to a more extensive hack. Consider the case of the nonprofit healthcare network Ascension, which operates 140 hospitals and 40 senior care facilities.
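To make the chain-of-custody point concrete, here is a minimal, illustrative sketch (the roles, dataset names, and users are hypothetical, not from the article) of least-privilege access checks with an audit trail. Limiting which roles may read which records shrinks the attack surface, and logging every access attempt preserves a custody record that can be reviewed after an incident.

```python
# Minimal sketch (hypothetical names): least-privilege access checks plus an
# audit trail, so every read attempt is authorized and recorded.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical role-to-dataset permissions; real deployments would load these
# from a policy store rather than hard-coding them.
ROLE_PERMISSIONS = {
    "billing": {"invoices"},
    "clinician": {"patient_records"},
    "analyst": {"aggregated_metrics"},
}

@dataclass
class AccessLog:
    entries: list = field(default_factory=list)

    def record(self, user: str, role: str, dataset: str, granted: bool) -> None:
        # Every access attempt, allowed or denied, is logged for custody review.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "dataset": dataset,
            "granted": granted,
        })

def can_access(role: str, dataset: str) -> bool:
    return dataset in ROLE_PERMISSIONS.get(role, set())

def read_dataset(user: str, role: str, dataset: str, log: AccessLog):
    granted = can_access(role, dataset)
    log.record(user, role, dataset, granted)
    if not granted:
        raise PermissionError(f"{user} ({role}) may not read {dataset}")
    return f"contents of {dataset}"  # placeholder for the real data store

if __name__ == "__main__":
    log = AccessLog()
    print(read_dataset("alice", "clinician", "patient_records", log))
    try:
        read_dataset("bob", "billing", "patient_records", log)
    except PermissionError as exc:
        print(exc)
    print(log.entries)
```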


Palo Alto Reports Firewalls Exploited Using an Unknown Flaw

Palo Alto said the flaw is being remotely exploited, carries a "critical" CVSS severity rating of 9.3 out of 10, and should be mitigated with the "highest" urgency. One challenge for users: no patch is yet available to fix the vulnerability, and no CVE identifier has been allocated for tracking it. "As we investigate the threat activity, we are preparing to release fixes and threat prevention signatures as early as possible," Palo Alto said. "At this time, securing access to the management interface is the best recommended action." The company said it doesn't believe its Prisma Access or Cloud NGFW products are at risk from these attacks. Cybersecurity researchers confirm that real-world details surrounding the attacks and the flaw remain scant. "Rapid7 threat intelligence teams have also been monitoring rumors of a possible zero-day vulnerability, but until now, those rumors have been unsubstantiated," the cybersecurity firm said in a Friday blog post. Palo Alto first warned customers on Nov. 8 that it was investigating reports of a zero-day vulnerability in the management interface for some types of firewalls and urged them to lock down the interfaces.
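As a generic illustration of "securing access to the management interface" (this is not Palo Alto configuration, and the subnets and log fields are assumptions), the sketch below flags any management-plane login that does not originate from an approved management network or jump host, the kind of allowlisting the vendor guidance points toward.

```python
# Illustrative sketch only (not vendor configuration): flag connections to a
# firewall management interface that come from outside an approved source list.
import ipaddress

# Hypothetical allowlist: a dedicated management VLAN and a single jump host.
PERMITTED_SOURCES = [
    ipaddress.ip_network("10.20.30.0/24"),   # management VLAN (assumed)
    ipaddress.ip_network("192.0.2.10/32"),   # bastion / jump host (assumed)
]

def is_permitted(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in PERMITTED_SOURCES)

def audit_management_logins(events: list[dict]) -> list[dict]:
    """Return login events whose source address is not on the allowlist."""
    return [e for e in events if not is_permitted(e["source_ip"])]

if __name__ == "__main__":
    sample_events = [
        {"user": "admin", "source_ip": "10.20.30.15"},
        {"user": "admin", "source_ip": "203.0.113.77"},  # unexpected external source
    ]
    for event in audit_management_logins(sample_events):
        print("review:", event)
```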


Award-winning palm biometrics study promises low-cost authentication

“By harnessing high-resolution mmWave signals to extract detailed palm characteristics,” he continued, “mmPalm presents a ubiquitous, convenient and cost-efficient option to meet the growing needs for secure access in a smart, interconnected world.” The mmPalm method employs mmWave technology, widely used in 5G networks, to capture a person’s palm characteristics by sending signals and analyzing their reflections, creating a unique palm print for each user. mmPalm also addresses difficulties that commonly arise in authentication technology, such as variations in distance and hand orientation. The system uses a conditional generative adversarial network (cGAN) to learn different palm orientations and distances and generates virtual profiles to fill in the gaps. It also adapts to new environments through a transfer learning framework, making mmPalm suitable for a variety of settings, and builds virtual antennas to increase the spatial resolution of a commercial mmWave device. Tested with 30 participants over six months, mmPalm displayed a 99 percent accuracy rate and was resistant to impersonation, spoofing and other potential breaches.
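For readers unfamiliar with how a signal-derived palm print becomes an authentication decision, here is a conceptual sketch, not the mmPalm implementation: the synthetic data, the feature extraction, and the 0.9 threshold are all assumptions. It enrolls a palm "signature" as a normalized feature vector averaged over several captures, then verifies later probes by cosine similarity against that template.

```python
# Conceptual sketch (not the mmPalm system): template enrollment and
# similarity-based verification over feature vectors derived from reflections.
import numpy as np

def extract_features(reflections: np.ndarray) -> np.ndarray:
    # Stand-in for real signal processing: flatten and normalize the profile.
    vec = reflections.astype(float).ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

def enroll(samples: list[np.ndarray]) -> np.ndarray:
    # Average several captures (e.g., different distances/orientations) into a template.
    feats = np.stack([extract_features(s) for s in samples])
    template = feats.mean(axis=0)
    return template / np.linalg.norm(template)

def verify(template: np.ndarray, capture: np.ndarray, threshold: float = 0.9) -> bool:
    score = float(np.dot(template, extract_features(capture)))
    return score >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=64)  # synthetic "true" palm profile
    genuine = [base + rng.normal(scale=0.05, size=64) for _ in range(5)]
    template = enroll(genuine)
    probe_genuine = base + rng.normal(scale=0.05, size=64)
    probe_impostor = rng.normal(size=64)  # a different person's profile
    print("genuine accepted:", verify(template, probe_genuine))
    print("impostor accepted:", verify(template, probe_impostor))
```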


Scaling From Simple to Complex Cache: Challenges and Solutions

To scale a cache effectively, you need to distribute data across multiple nodes through techniques like sharding or partitioning. This improves storage efficiency and ensures that each node stores only a portion of the data. ... A simple cache can often handle node failures through manual intervention or basic failover mechanisms. A larger, more complex cache requires robust fault-tolerance mechanisms, including data replication across multiple nodes so that if one node fails, others can take over seamlessly. It must also handle more catastrophic failures, which may lead to significant downtime as data is reloaded into memory from the persistent store, a process known as warming up the cache. ... As the cache grows, pure caching solutions struggle to deliver linear latency performance while keeping infrastructure costs under control. Many caching products were written to be fast at small scale, and pushing them beyond what they were designed for exposes inefficiencies in their internal processes. Latency issues may arise as more and more data is cached; lookup times can increase as the cache devotes more resources to managing the increased scale than to serving traffic.
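One common way to implement the sharding the article describes is consistent hashing: keys map to positions on a hash ring, each key is owned by the next node clockwise, and adding or removing a node only moves the keys adjacent to it. The sketch below is a minimal illustration (node names and virtual-node count are arbitrary), not a production cache client.

```python
# Minimal consistent-hashing sketch: distribute cache keys across nodes so that
# each node holds only a slice of the data and node changes move few keys.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes: list[str], vnodes: int = 100):
        self._ring: list[tuple[int, str]] = []
        for node in nodes:
            self.add_node(node, vnodes)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str, vnodes: int = 100) -> None:
        # Virtual nodes smooth out the key distribution across physical nodes.
        for i in range(vnodes):
            self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    def node_for(self, key: str) -> str:
        # Find the first ring position at or after the key's hash, wrapping around.
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

if __name__ == "__main__":
    ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
    for k in ["user:42", "session:abc", "order:9001"]:
        print(k, "->", ring.node_for(k))
```

In practice, each shard would also be replicated to one or more peers so a failed node's keys can be served without waiting for a full cache warm-up from the persistent store.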


Understanding the Modern Web and the Privacy Riddle

The main question is why users are willing to surrender their data without questioning how it is used. This could be attributed to the effect of the virtual panopticon, where users believe they are cooperating with agencies (government or private) that claim to respect their privacy in exchange for services. The Universal ID project (Aadhaar) in India, for instance, began as a means of providing identity to the poor in order to deliver social services, but its scope has gradually expanded, leading to significant function creep. Originally intended for de-duplication and preventing ‘leakages,’ it later became essential for enabling private businesses, fostering a cashless economy, and tracking digital footprints. ... In the modern web, users occupy multiple roles—as service providers, users, and visitors—while adopting multiple personas. This shift requires greater information disclosure, as users benefit from the web’s capabilities and treat their own data as currency. The unraveling of privacy has become the new norm, where withholding information is no longer an option because of the stigmatization of secrecy. Over the past few years, there has been a significant shift in how consumers and websites view privacy. Users have developed a heightened sensitivity to the use of their personal information and now recognize their basic right to internet privacy.


Databases Are a Top Target for Cybercriminals: How to Combat Them

Most ransomware, including Mailto, Sodinokibi (REvil), and Ragnar Locker, can encrypt pages within a database and destroy them. This means the slow, unnoticed encryption of everything from sensitive customer records to critical network resources, including Active Directory, DNS, and Exchange, and even lifesaving patient health information. Because databases can continue to run even with corrupted pages, it can take longer to realize that they have been attacked. Most often, the wreckage of the attack is discovered only when the database is taken down for routine maintenance, and by that time thousands of records could be gone. Databases are an attractive target for cybercriminals because they offer a wealth of information that can be used or sold on the dark web, potentially leading to further breaches and attacks. Industries such as healthcare, finance, logistics, education, and transportation are particularly vulnerable. The information contained in these databases is highly valuable, as it can be exploited for spamming, phishing, financial fraud, and tax fraud. Additionally, cybercriminals can sell this data for significant sums on dark web auctions or marketplaces.
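Because page-level encryption can go unnoticed while the database keeps running, one defensive habit is to verify record integrity on a schedule rather than waiting for maintenance windows. The sketch below is a generic, product-agnostic illustration (record IDs and fields are hypothetical): it keeps a per-record checksum taken while the data is known to be good and reports any record that later stops matching.

```python
# Illustrative sketch (not tied to any specific database product): detect silent
# corruption or unauthorized encryption by verifying per-record checksums.
import hashlib
import json

def checksum(record: dict) -> str:
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def snapshot(records: dict[str, dict]) -> dict[str, str]:
    """Record ID -> checksum, captured while the data is known to be good."""
    return {rid: checksum(rec) for rid, rec in records.items()}

def verify(records: dict[str, dict], baseline: dict[str, str]) -> list[str]:
    """Return IDs whose current contents no longer match the trusted baseline."""
    return [rid for rid, rec in records.items()
            if baseline.get(rid) != checksum(rec)]

if __name__ == "__main__":
    data = {"p1": {"name": "A. Patient", "dob": "1980-01-01"}}
    baseline = snapshot(data)
    data["p1"]["dob"] = "ENCRYPTED~~~"   # simulate silent tampering
    print("suspect records:", verify(data, baseline))
```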


The Impact of Cloud Transformation on IT Infrastructure

With digital transformation accelerating across industries, the IT ecosystem comprises traditional and cloud-native applications. This mixed environment demands a flexible, multi-cloud strategy to accommodate diverse application requirements and operational models. The ability to move workloads between public and private clouds has become essential, allowing companies to dynamically balance performance and cost considerations. We are committed to delivering cloud solutions supporting seamless workload migration and interoperability, empowering businesses to leverage the best of public and private clouds. ... With today’s service offerings and tools, migrating between on-premises and cloud environments has become straightforward, enabling continuous optimization rather than one-time changes. Cloud-native applications, particularly those built on containers and microservices, are inherently optimized for public and private cloud setups, allowing for dynamic scaling and efficient resource use. To fully optimize, companies should adopt cloud-native principles, including automation, continuous integration, and orchestration, which streamline performance and resource efficiency. Robust tools such as identity and access management (IAM), encryption, and automated security updates address security and reliability, ensuring compliance and data protection.



Quote for the day:

"The elevator to success is out of order. You’ll have to use the stairs…. One step at a time.” -- Rande Wilson