Daily Tech Digest - November 21, 2024

Building Resilient Cloud Architectures for Post-Disaster IT Recovery

A resilient cloud architecture is designed to maintain functionality and service quality during disruptive events. These architectures ensure that critical business applications remain accessible, data remains secure, and recovery times are minimized, allowing organizations to maintain operations even under adverse conditions. To achieve resilience, cloud architectures must be built with redundancy, reliability, and scalability in mind. This involves a combination of technologies, strategies, and architectural patterns that, when applied collectively ... Cloud-based DRaaS solutions allow organizations to recover critical workloads quickly by replicating environments in a secondary cloud region. This ensures that essential services can be restored promptly in the event of a disruption. Automated backups, on the other hand, ensure that all extracted data is continually saved and stored in a secure environment. Using regular snapshots can also provide rapid restoration points, giving teams the ability to revert systems to a pre-disaster state efficiently. ... Infrastructure as code (IaC) allows for the automated setup and configuration of cloud resources, providing a faster recovery process after an incident.
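
The excerpt stops short of showing what this automation can look like in practice. As a minimal sketch, assuming AWS EBS volumes and the boto3 SDK (the volume ID and region names below are illustrative, not taken from the article), a scheduled job could create restoration-point snapshots and replicate them to a secondary region:

```python
# Minimal sketch: create an EBS snapshot as a restoration point and copy it to a
# secondary region for disaster recovery. Assumes AWS credentials are configured;
# the volume ID and regions are hypothetical placeholders.
import boto3

PRIMARY_REGION = "us-east-1"            # assumed primary region
DR_REGION = "us-west-2"                 # assumed secondary (recovery) region
VOLUME_ID = "vol-0123456789abcdef0"     # hypothetical volume

ec2_primary = boto3.client("ec2", region_name=PRIMARY_REGION)
ec2_dr = boto3.client("ec2", region_name=DR_REGION)

# Take a point-in-time snapshot in the primary region and tag it for DR.
snapshot = ec2_primary.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="Daily restoration point",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "purpose", "Value": "disaster-recovery"}],
    }],
)

# Once the snapshot completes, replicate it to the secondary region.
ec2_primary.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
ec2_dr.copy_snapshot(
    SourceRegion=PRIMARY_REGION,
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="Cross-region copy for disaster recovery",
)
```

In a full IaC setup, the same idea would typically live in a Terraform or CloudFormation definition plus a scheduler rather than an ad hoc script, so that the recovery environment itself can be rebuilt from code.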


Agile Security Sprints: Baking Security into the SDLC

Making agile security sprints effective requires organizations to embrace security as a continuous, collaborative effort. The first step? Integrating security tasks into the product backlog right alongside functional requirements. This approach ensures that security considerations are tackled within the same sprint, allowing teams to address potential vulnerabilities as they arise — not after the fact when they're harder and more expensive to fix. ... By addressing security iteratively, teams can continuously improve their security posture, reducing the risk of vulnerabilities becoming unmanageable. Catching security issues early in the development lifecycle minimizes delays, enabling faster, more secure releases, which is critical in a competitive development landscape. The emphasis on collaboration between development and security teams breaks down silos, fostering a culture of shared responsibility and enhancing the overall security-consciousness of the organization. Quickly addressing security issues is often far more cost-effective than dealing with them post-deployment, making agile security sprints a necessary choice for organizations looking to balance speed with security.


The new paradigm: Architecting the data stack for AI agents

With the semantic layer and historical data-based reinforcement loop in place, organizations can power strong agentic AI systems. However, it’s important to note that building a data stack this way does not mean downplaying the usual best practices. This essentially means that the platform being used should ingest and process data in real-time from all major sources, have systems in place for ensuring the quality/richness of the data and then have robust access, governance and security policies in place to ensure responsible agent use. “Governance, access control, and data quality actually become more important in the age of AI agents. The tools to determine what services have access to what data become the method for ensuring that AI systems behave in compliance with the rules of data privacy. Data quality, meanwhile, determines how well an agent can perform a task,” Naveen Rao, VP of AI at Databricks, told VentureBeat. ... “No agent, no matter how high the quality or impressive the results, should see the light of day if the developers don’t have confidence that only the right people can access the right information/AI capability. This is why we started with the governance layer with Unity Catalog and have built our AI stack on top of that,” Rao emphasized.


Enhancing visibility for better security in multi-cloud and hybrid environments

The number one challenge for infrastructure and cloud security teams is visibility into their overall risk, especially in complex environments like cloud, hybrid cloud, containers, and Kubernetes. Kubernetes is now the tool of choice for orchestrating and running microservices in containers, but it has also been one of the last areas to catch up from a security perspective, leaving many security teams back on their heels. This is true even if they have deployed admission control or have other container security measures in place. Teams need a security tool in place that can show them who is accessing their workloads and what is happening in them at any given moment, given the ephemeral nature of these environments. A lot of legacy tooling just has not kept up with this demand. The best results come from tooling that provides real-time visibility and real-time detection, not point-in-time snapshotting, which cannot keep up with the ever-changing nature of modern cloud environments. To achieve better visibility in the cloud, automate security monitoring and alerting to reduce manual effort and ensure comprehensive coverage. Centralize security data using dashboards or log aggregation tools to consolidate insights from across your cloud platforms.
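
As one concrete, deliberately simplified illustration of centralizing security data, a script could pull active, high-severity findings from an aggregation service into a single feed for dashboards or alerting. The sketch below assumes AWS Security Hub and boto3; the filters and the forwarding step are placeholders, not a prescription from the article:

```python
# Sketch: consolidate high-severity, active findings into one feed for alerting.
# Assumes AWS Security Hub is enabled; filters and the forwarding step are illustrative.
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

findings = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=50,
)["Findings"]

for finding in findings:
    # Forward to a central dashboard, chat channel, or ticketing system here.
    print(finding["Title"], finding["Resources"][0]["Id"])
```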


How Augmented Reality is Shaping EV Development and Design

Traditionally, prototyping has been a costly and time-consuming stage in vehicle development, often requiring multiple physical models and extensive trial and error. AR is disrupting this process by enabling engineers to create and test virtual prototypes before building physical ones. Through immersive visualizations, teams can virtually assess design aspects like fit, function, and aesthetics, streamlining modifications and significantly shortening development cycles. ... One of the key shifts in EV manufacturing is the emphasis on consumer-centric design. EV buyers today expect not just efficiency but also vehicles that reflect their lifestyle choices, from customizable interiors to cutting-edge tech features. AR offers manufacturers a way to directly engage consumers in the design process, offering a virtual showroom experience that enhances the customization journey. ... AR-assisted training is one frontier seeing a lot of adoption. By removing humans from dangerous scenarios while still allowing them to interact with those same scenarios, companies can increase safety while still offering practical training. In one example from Volvo, augmented reality is allowing first responders to assess damage to EVs and proceed with caution.


Digital twins: The key to unlocking end-to-end supply chain growth

Digital twins can be used to model the interaction between physical and digital processes all along the supply chain—from product ideation and manufacturing to warehousing and distribution, from in-store or online purchases to shipping and returns. Thus, digital twins paint a clear picture of an optimal end-to-end supply chain process. What’s more, paired with today’s advances in predictive AI, digital twins can become both predictive and prescriptive. They can predict future scenarios to suggest areas for improvement or growth, ultimately leading to a self-monitoring and self-healing supply chain. In other words, digital twins empower the switch from heuristic-based supply chain management to dynamic and granular optimization, providing a 360-degree view of value and performance leakage. To understand how a self-healing supply chain might work in practice, let’s look at one example: using digital twins, a retailer sets SKU-level safety stock targets for each fulfillment center that dynamically evolve with localized and seasonal demand patterns. Moreover, this granular optimization is applied not just to inventory management but also to every part of the end-to-end supply chain—from procurement and product design to manufacturing and demand forecasting.
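
To make the safety-stock example concrete, here is a minimal sketch of one common textbook formulation (not the retailer's actual model): the target is derived from recent local demand variability and replenishment lead time, and is recomputed per SKU and fulfillment center as demand shifts.

```python
# Sketch of dynamic, SKU-level safety stock using a standard formula:
#   safety_stock = z * sigma_demand * sqrt(lead_time_days)
# where z is the service-level factor. All inputs below are illustrative.
from math import sqrt
from statistics import stdev

def safety_stock(daily_demand_history, lead_time_days, z=1.65):
    """z = 1.65 roughly corresponds to a 95% service level."""
    sigma = stdev(daily_demand_history)   # local demand variability
    return z * sigma * sqrt(lead_time_days)

# Recompute per SKU and fulfillment center as new demand data arrives,
# so the target evolves with localized and seasonal patterns.
recent_demand = [42, 55, 38, 61, 47, 52, 70, 44, 58, 49]  # last 10 days (hypothetical)
print(round(safety_stock(recent_demand, lead_time_days=4)))
```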


Illegal Crypto Mining: How Businesses Can Prevent Themselves From Being ‘Cryptojacked’

Business leaders might believe that illegal crypto mining programs pose no risks to their operations. Considering the resources most businesses dedicate to cybersecurity, it might seem like a low priority in comparison to other risks. However, the successful deployment of malicious crypto mining software can lead to even more risks for businesses, putting their cybersecurity posture in jeopardy. Malware and other forms of malicious software can drain computing resources, cutting the life expectancy of computer hardware. This can decrease the long-term performance and productivity of all infected computers and devices. Additionally, the large amount of energy required to support the high computing power of crypto mining can drive up electricity consumption across the organization. But one of the most severe risks associated with malicious crypto mining software is that it can include other code that exploits existing vulnerabilities. ... While powerful cybersecurity tools are certainly important, there’s no single solution to combat illegal crypto mining. But there are different strategies that business leaders can implement to reduce the likelihood of a breach, and mitigating human error is among the most important.


10 Most Impactful PAM Use Cases for Enhancing Organizational Security

Security extends beyond internal employees as collaborations with third parties also introduce vulnerabilities. PAM solutions allow you to provide vendors with time-limited, task-specific access to your systems and monitor their activity in real time. With PAM, you can also promptly revoke third-party access when a project is completed, ensuring no dormant accounts remain unattended. Suppose you engage third-party administrators to manage your database. In this case, PAM enables you to restrict their access based on a "need-to-know" basis, track their activities within your systems, and automatically remove their access once they complete the job. ... Reused or weak passwords are easy targets for attackers. Relying on manual password management adds another layer of risk, as it is both tedious and prone to human error. That's where PAM solutions with password management capabilities can make a difference. Such solutions can help you secure passwords throughout their entire lifecycle — from creation and storage to automatic rotation. By handling credentials with such PAM solutions and setting permissions according to user roles, you can make sure all the passwords are accessible only to authorized users. 
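
Each PAM product exposes this differently; purely as an illustration (not any vendor's API), a time-boxed, task-specific grant with automatic revocation might be modeled like this:

```python
# Vendor-neutral sketch of time-limited, task-specific third-party access.
# All names and scopes are hypothetical; real PAM platforms provide their own APIs.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    vendor: str
    scope: str            # "need-to-know" resource, e.g. a single database
    expires_at: datetime
    revoked: bool = False

    def is_active(self) -> bool:
        return not self.revoked and datetime.now(timezone.utc) < self.expires_at

grants: list[AccessGrant] = []

def grant_vendor_access(vendor: str, scope: str, hours: int) -> AccessGrant:
    grant = AccessGrant(vendor, scope, datetime.now(timezone.utc) + timedelta(hours=hours))
    grants.append(grant)
    return grant

def revoke_expired() -> None:
    """Run on a schedule so no dormant third-party accounts remain unattended."""
    for grant in grants:
        if not grant.is_active():
            grant.revoked = True

# Example: a database contractor gets eight hours of access to one system only.
grant_vendor_access("acme-dba", "db:orders-prod:admin", hours=8)
revoke_expired()
```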


The Information Value Chain as a Framework for Tackling Disinformation

The information value chain has three stages: production, distribution, and consumption. Claire Wardle proposed an early version of this framework in 2017. Since then, scholars have suggested tackling disinformation through an economics lens. Using this approach, we can understand production as supply, consumption as demand, and distribution as a marketplace. In so doing, we can single out key stakeholders at each stage and determine how best to engage them to combat disinformation. By seeing disinformation as a commodity, we can better identify and address the underlying motivations ... When it comes to the disinformation marketplace, disinformation experts mostly agree it is appropriate to point the finger at Big Tech. Profit-driven social media platforms have understood for years that our attention is the ultimate gold mine and that inflammatory content is what attracts the most attention. There is, therefore, a direct correlation between how much disinformation circulates on a platform and how much money it makes from advertising. ... To tackle disinformation, we must think like economists, not just like fact-checkers, technologists, or investigators. We must understand the disinformation value chain and identify the actors and their incentives, obstacles, and motivations at each stage.


Why do developers love clean code but hate writing documentation?

In fast-paced development environments, particularly those adopting Agile methodologies, maintaining up-to-date documentation can be challenging. Developers often deprioritize documentation due to tight deadlines and a focus on delivering working code. This leads to informal, hard-to-understand documentation that quickly becomes outdated as the software evolves. Another significant issue is that documentation is frequently viewed as unnecessary overhead. Developers may believe that code should be self-explanatory or that documentation slows down the development process. ... To prevent documentation from becoming a second-class citizen in the software development lifecycle, Ferri-Benedetti argues that documentation needs to be observable, something that can be measured against the KPIs and goals developers and their managers often use when delivering projects. ... By offloading the burden of documentation creation onto AI, developers are free to stay in their flow state, focusing on the tasks they enjoy—building and problem-solving—while still ensuring that the documentation remains comprehensive and up-to-date. Perhaps most importantly, this synergy between GenAI and human developers does not remove human oversight.



Quote for the day:

"The harder you work for something, the greater you'll feel when you achieve it." -- Unknown

Daily Tech Digest - November 20, 2024

5 Steps To Cross the Operational Chasm in Incident Management

A siloed approach to incident management slows down decision-making and harms cross-team communication during incidents. Instead, organizations must cultivate a cross-functional culture where all team members are able to collaborate seamlessly. Cross-functional collaboration ensures that incident response plans are comprehensive and account for the insights and expertise contained within specific teams. This communication can be expedited with the support of AI tools to summarize information and draft messages, as well as the use of automation for sharing regular updates. ... An important step in developing a proactive incident management strategy is conducting post-incident reviews. When incidents are resolved, teams are often so busy that they are forced to move on without examining the contributing factors or identifying where processes can be improved. Conducting blameless reviews after significant incidents — and ideally every incident — is crucial for continuously and iteratively improving the systems in which incidents occur. This should cover both the technological and human aspects. Reviews must be thorough and uncover process flaws, training gaps or system vulnerabilities to improve incident management.


How to transform your architecture review board

A modernized approach to architecture review boards should start with establishing a partnership, building trust, and seeking collaboration between business leaders, devops teams, and compliance functions. Everyone in the organization uses technology, and many leverage platforms that extend the boundaries of architecture. Winbush suggests that devops teams must also extend their collaboration to include enterprise architects and review boards. “Don’t see ARBs as roadblocks, and treat them as a trusted team that provides much-needed insight to protect the team and the business,” he suggests. ... “Architectural review boards remain important in agile environments but must evolve beyond manual processes, such as interviews with practitioners and conventional tools that hinder engineering velocity,” says Moti Rafalin, CEO and co-founder of vFunction. “To improve development and support innovation, ARBs should embrace AI-driven tools to visualize, document, and analyze architecture in real-time, streamline routine tasks, and govern app development to reduce complexity.” ... “Architectural observability and governance represent a paradigm shift, enabling proactive management of architecture and allowing architects to set guardrails for development to prevent microservices sprawl and resulting complexity,” adds Rafalin.


Business Internet Security: Everything You Need to Consider

Each device on your business’s network, from computers to mobile phones, represents a potential point of entry for hackers. Treat connected devices as a door to your Wi-Fi networks, ensuring each one is secure enough to protect the entire structure. ... Software updates often include vital security patches that address identified vulnerabilities. Delaying updates on your security software is like ignoring a leaky roof; if left unattended, it will only get worse. Patch management and regularly updating all software on all your devices, including antivirus software and operating systems, will minimize the risk of exploitation. ... With cyber threats continuing to evolve and become more sophisticated, businesses can never be complacent about internet security and protecting their private network and data. Taking proactive steps toward securing your digital infrastructure and safeguarding sensitive data is a critical business decision. Prioritizing robust internet security measures safeguards your small business and ensures you’re well-equipped to face whatever kind of threat may come your way. While implementing these security measures may seem daunting, partnering with the right internet service provider like Optimum can give you a head start on your cybersecurity journey.


How Google Cloud’s Information Security Chief Is Preparing For AI Attackers

To build out his team, Venables added key veterans of the security industry, including Taylor Lehmann, who led security engineering teams for the Americas at Amazon Web Services, and MK Palmore, a former FBI agent and field security officer at Palo Alto Networks. “You need to have folks on board who understand that security narrative and can go toe-to-toe and explain it to CIOs and CISOs,” Palmore told Forbes. “Our team specializes in having those conversations, those workshops, those direct interactions with customers.” ... Generally, a “CISO is going to meet with a very small subset of their clients,” said Charlie Winckless, senior director analyst on Gartner's Digital Workplace Security team. “But the ability to generate guidance on using Google Cloud from the office of the CISO, and make that widely available, is incredibly important.” Google is trying to do just that. Last summer, Venables co-led the development of Google’s Secure AI Framework, or SAIF, a set of guidelines and best practices for security professionals to safeguard their AI initiatives. It’s based on six core principles, including making sure organizations have automated defense tools to keep pace with new and existing security threats, and putting policies in place that make it faster for companies to get user feedback on newly deployed AI tools.


11 ways to ensure IT-business alignment

A key way to facilitate alignment is to become agile enough to stay ahead of the curve, and be adaptive to change, Bragg advises. The CIO should also speak early when sensing a possible business course deviation. “A modern digital corporation requires IT to be a good partner in driving to the future rather than dwelling on a stable state.” IT leaders also need to be agile enough to drive and support change, communicate effectively, and be transparent about current projects and initiatives. ... To build strong ties, IT leaders must also listen to and learn from their business counterparts. “IT leaders can’t create a plan to enable business priorities in a vacuum,” Haddad explains. “It’s better to ask [business] leaders to share their plans, removing the guesswork around business needs and intentions.” ... When IT and the business fail to align, silos begin to form. “In these silos, there’s minimal interaction between parties, which leads to misaligned expectations and project failures because the IT actions do not match up with the company direction and roadmap,” Bronson says. “When companies employ a reactive rather than a proactive approach, the result is an IT function that’s more focused on putting out fires than being a value-add to the business.”


Edge Extending the Reach of the Data Center

Savings in communications can be achieved, and low-latency transactions can be realized if mini-data centers containing servers, storage and other edge equipment are located proximate to where users work. Industrial manufacturing is a prime example. In this case, a single server can run entire assembly lines and robotics without the need to tap into the central data center. Data that is relevant to the central data center can be sent later in a batch transaction at the end of a shift. ... Organizations are also choosing to co-locate IT in the cloud. This can reduce the cost of on-site hardware and software, although it does increase the cost of processing transactions and may introduce some latency into the transactions being processed. In both cases, there are overarching network management tools that enable IT to see, monitor and maintain network assets, data, and applications no matter where they are. ... Most IT departments are not at a point where they have all of their IT under a central management system, with the ability to see, tune, monitor and/or mitigate any event or activity anywhere. However, we are at a point where most CIOs recognize the necessity of funding and building a roadmap to this “uber management” network concept.


Orchestrator agents: Integration, human interaction, and enterprise knowledge at the core

“Effective orchestration agents support integrations with multiple enterprise systems, enabling them to pull data and execute actions across the organizations,” Zllbershot said. “This holistic approach provides the orchestration agent with a deep understanding of the business context, allowing for intelligent, contextual task management and prioritization.” For now, AI agents exist in islands within themselves. However, service providers like ServiceNow and Slack have begun integrating with other agents. ... Although AI agents are designed to go through workflows automatically, experts said it’s still important that the handoff between human employees and AI agents goes smoothly. The orchestration agent allows humans to see where the agents are in the workflow and lets the agent figure out its path to complete the task. “An ideal orchestration agent allows for visual definition of the process, has rich auditing capability, and can leverage its AI to make recommendations and guidance on the best actions. At the same time, it needs a data virtualization layer to ensure orchestration logic is separated from the complexity of back-end data stores,” said Pega’s Schuerman.


The Transformative Potential of Edge Computing

Edge computing devices like sensors continuously monitor the car’s performance, sending data back to the cloud for real-time analysis. This allows for early detection of potential issues, reducing the likelihood of breakdowns and enabling proactive maintenance. As a result, the vehicle is more reliable and efficient, with reduced downtime. Each sensor relies on a hyperconnected network that seamlessly integrates data-driven intelligence, real-time analytics, and insights through an edge-to-cloud continuum – an interconnected ecosystem spanning diverse cloud services and technologies across various environments. By processing data at the edge, within the vehicle, the amount of data transmitted to the cloud is reduced. ... No matter the industry, edge computing and cloud technology require a reliable, scalable, and global hyperconnected network – a digital fabric – to deliver operational and innovative benefits to businesses and create new value and experiences for customers. A digital fabric is pivotal in shaping the future of infrastructure. It ensures that businesses can leverage the full potential of edge and cloud technologies by supporting the anticipated surge in network traffic, meeting growing connectivity demands, and addressing complex security requirements.


The risks and rewards of penetration testing

It is impossible to predict how systems may react to penetration testing. As was the case with our customer, an unknown flaw or misconfiguration can lead to catastrophic results. Skilled penetration testers usually can anticipate such issues. However, even the best white hats are imperfect. It is better to discover these flaws during a controlled test than during a data breach. While performing tests, keep IT support staff available to respond to disruptions. Furthermore, do not be alarmed if your penetration testing provider asks you to sign an agreement that releases them from any liability due to testing. ... Black hats will generally follow the path of least resistance to break into systems. This means they will use well-known vulnerabilities they are confident they can exploit. Some hackers are still using ancient vulnerabilities, such as SQL injection, which date back to 1995. They use these because they work. It is uncommon for black hats to use unknown or “zero-day” exploits. These are reserved for high-value targets, such as government, military, or critical infrastructure. It is not feasible for white hats to test every possible way to exploit a system. Rather, they should focus on a broad set of commonly used exploits. Lastly, not every vulnerability is dangerous.


How Data Breaches Erode Trust and What Companies Can Do

A data breach can prompt customers to lose trust in an organisation, compelling them to take their business to a competitor whose reputation remains intact. A breach can discourage partners from continuing their relationship with a company since partners and vendors often share each other’s data, which may now be perceived as an elevated risk not worth taking. Reputational damage can devalue publicly traded companies and scupper a funding round for a private company. The financial cost of reputational damage may not be immediately apparent, but its consequences can reverberate for months and even years. ... In order to optimise cybersecurity efforts, organisations must consider the vulnerabilities particular to them and their industry. For example, financial institutions, often the target of more involved patterns like system intrusion, must invest in advanced perimeter security and threat detection. With internal actors factoring so heavily in healthcare, hospitals must prioritise cybersecurity training and stricter access controls. Major retailers that can’t afford extended downtime from a DoS attack must have contingency plans in place, including disaster recovery.



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - November 19, 2024

AI-driven software testing gains more champions but worries persist

"There is a clear need to align quality engineering metrics with business outcomes and showcase the strategic value of quality initiatives to drive meaningful change," the survey's team of authors, led by Jeff Spevacek of OpenText, stated. "On the technology front, the adoption of newer, smarter test automation tools has driven the average level of test automation to 44%. However, the most transformative trend this year is the rapid adoption of AI, particularly Gen AI, which is set to make a huge impact." ... While AI offers great promise as a quality and testing tool, the study said there are "significant challenges in validating protocols, AI models, and the complexity of validation of all integrations. Currently, many organizations are struggling to implement comprehensive test strategies that ensure optimized coverage of critical areas. However, looking ahead, there is a strong expectation that AI will play a pivotal role in addressing these challenges and enhancing the effectiveness of testing activities in this domain." The key takeaway point from the research is that software quality engineering is rapidly evolving: "Once defined as testing human-written software, it has now evolved with AI-generated code."


How IAM Missteps Cause Data Breaches

Here’s where it gets complicated. Implementing least privilege requires an application’s requirements specifications to be available on demand with details of the hierarchy and context behind every interconnected resource. Developers rarely know exactly which permissions each service needs. For example, to perform a read on an S3 bucket, we also need permissions to list the contents of the S3 bucket. ... This is where we begin to be reactive and apply tools that scan for misconfigurations. Tools like AWS IAM Access Analyzer or Google Cloud’s IAM recommender are valuable for identifying risky permissions or potential overreach. However, if these tools become the primary line of defense, they can create a false sense of security. Most permission-checking tools are designed to analyze permissions at a point in time, often flagging issues after permissions are already in place. This reactive approach means that misconfigurations are only addressed after they occur, leaving systems vulnerable until the next scan. ... The solution lies in rethinking the way in which we wire up these relationships in the first place. Let’s take a look at two very simple pieces of code that both expose an API with a route to return a pre-signed URL from a cloud storage bucket.
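
The excerpt cuts off before the code it references. As an illustrative stand-in (not the article's original snippets), a minimal route returning a pre-signed URL for an S3 object might look like the sketch below; the Flask framing, bucket name, and expiry are assumptions. The point of the article's comparison is how much implicit permission wiring even a route this small can require.

```python
# Sketch: an API route that returns a short-lived pre-signed URL for an S3 object.
# The service role behind it needs only s3:GetObject on this bucket/prefix
# (least privilege), not broad S3 access. The bucket name is hypothetical.
import boto3
from flask import Flask, jsonify

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "example-reports-bucket"   # assumed bucket name

@app.route("/download-url/<path:key>")
def presigned_download(key: str):
    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,              # short-lived link limits exposure
    )
    return jsonify({"url": url})
```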


Explainable AI: A question of evolution?

Inexplicable black boxes lead back to the bewitchment of the Sorting Hat; with real-life tools we need to know how their decisions are made. As for the human-in-the-loop on whom we are pinning so much, if they are to step in and override AI decisions, the humans better be on more than just speaking terms with their tools. Explanation is their job description. And it’s where the tools are used by the state to make decisions about us, our lives, liberty and livelihoods, that the need for explanation is greatest. Take a policing example. Whether or not drivers understand them, we’ve been rubbing along with speed cameras for decades. What will AI-enabled road safety tools look and sound and think like? If they’re on speaking terms with our in-car telematics, they’ll know what we’ve been up to behind the wheel for the last year, not just the last mile. Will they be on speaking terms with juries, courts and public inquiries, reconstructing events that took place before they were even invented, together with all the attendant sounds, smells and sensation rather than just pics and stats? Much depends on the type of AI involved but even Narrow AI has given the police new reach like remote biometrics.


Rethinking Documentation for Agile Teams

Documentation doesn’t need to be a separate task or deliverable to complete. During every meeting or asynchronous interaction, you can organically create documentation by using a virtual whiteboard to take notes, create visuals, and complete activities. ... Look for tools that can help you build and maintain your technical documentation with less effort. Modern visual collaboration solutions like Lucid offer advanced features to streamline documentation. These solutions can automatically generate various diagrams such as flowcharts, ERDs, org charts, and UML diagrams directly from your data. Some even incorporate AI assistance to help build and optimize diagrams. By using automation, teams can significantly reduce errors commonly associated with the manual creation of documentation. Another advantage of these platforms is the ability to link your data sources directly to your documents. This integration ensures your documentation stays up to date automatically, without requiring additional effort. What's more, advanced visual collaboration solutions integrate with project management tools like Jira and Azure DevOps. This integration allows teams to seamlessly share visuals between their chosen platforms, saving time and effort in keeping information synchronized across their environment.


Succeeding with observability in the cloud

The complexity of modern cloud environments amplifies the need for robust observability. Cloud applications today are built upon microservices, RESTful APIs, and containers, often spanning multicloud and hybrid architectures. This interconnectivity and distribution introduce layers of complexity that traditional monitoring paradigms struggle to capture. Observability addresses this by utilizing advanced analytics, artificial intelligence, and machine learning to analyze real-time logs, traces, and metrics, effectively transforming operational data into actionable insights. One of observability’s core strengths is its capacity to provide a continuous understanding of system operations, enabling proactive management instead of waiting for failures to manifest. Observability empowers teams to identify potential issues before they escalate, shifting from a reactive troubleshooting stance to a proactive optimization mindset. This capability is crucial in environments where systems must scale instantly to accommodate fluctuating demands while maintaining uninterrupted service.


How to Reduce VDI Costs

The onset of widespread remote work made the strategy much more prevalent, given that many organizations already had VDI infrastructure and experience. Due to its architectural design, infrastructure requirements scale more or less linearly with usage. But that means most organizations are often upside-down in their VDI investment — given that the costs are significant — and it seems that both practitioners and users have disdain for the experience. ... Maintaining VDI can be costly due to the need for patch management, hardware upgrades and support for end-user issues. An enterprise browser eliminates maintenance costs associated with traditional VDI systems because it requires no additional hardware. It also lowers administrative costs by centralizing controls within the browser, which reduces the need for multiple security tools and streamlines policy management. ... VDI solutions and their back-end systems can have substantial licensing fees, including the VDI platform and any extra licenses for the operating systems and apps used in VDI sessions. An enterprise browser can reduce the need for VDI by 80% to 90%, saving money on licensing costs. ... Ensuring secure and compliant endpoint interactions within a VDI session often requires additional endpoint controls and management solutions. 


Quantum computing: The future just got faster

Quantum computing holds promise for breakthroughs in many different industries. For example, scientists could use this technology to improve drug research by remodeling complex molecules and interactions that were previously computationally prohibitive. Complex optimization problems, like those encountered in logistics and supply chain management, could see solutions that drastically reduce costs and improve efficiency. Quantum computers could revolutionize cryptography by rapidly solving mathematical problems that underpin current encryption methods, posing both opportunities and significant security challenges. Sure, logistics and molecular simulations might sound far off for us regular folks, but there are applications that are right around the corner. For example, quantum computing could allow marketers to quickly analyze and process vast amounts of consumer data to identify trends, optimize ad placements, and tailor campaigns in real-time. While traditional data analysis might take hours or days to sift through customer preferences, a quantum computer could potentially complete this analysis in minutes, providing marketers with insights to adjust strategies almost instantaneously.


Why AI alone can’t protect you from sophisticated email threats

The battle between AI-based social engineering and AI-powered security measures is an ongoing one. Sophisticated attackers may develop techniques to evade AI detection, such as using ever more subtle and contextually accurate language, but security tools will then adapt to this, putting the pressure back on the attackers. So while AI-based behavioural analysis is a powerful tool in the fight against sophisticated social engineering attacks, it is most effective when used within a multi-layered defence strategy that includes security awareness training and other security measures. ... Alternative strategies for CISOs to consider include integrating AI and machine learning into the email security platform. AI/ML can analyse vast amounts of data in real time to identify anomalies and malicious patterns and respond accordingly. Behavioural analytics help detect unusual activities and patterns that indicate potential threats. ... Ensuring the security of email communications, especially with the involvement of third-party vendors, requires a comprehensive approach that is based both on security due diligence of the partner and effective security tools. Before engaging with any third party, an organisation should conduct a background check and security assessment.


Shortsighted CEOs leave CIOs with increasing tech debt

There’s a delicate balance between short- and long-term IT goals. A lot of the current focus with AI projects is to cut costs and drive efficiencies, but organizations also need to think about longer-term innovation, says Taylor Brown, co-founder and COO of Fivetran, vendor of a data management platform. “Every business, at some scale, is based on the decision of, ‘Do I continue to invest to make my product better and update it, or do I just keep driving the revenue that I have out of the product that I have?’” he says. “A lot of companies face this, and if you want to stay relevant, you want to compete and invest in innovation.” There are some companies that can probably survive by not thinking about long-term innovation, but they are few and far between, Brown says. “If you’re a technology company, then absolutely, you have to constantly be thinking about innovation, unless you have some crazy lock-in,” he adds. “In order to win new customers, you have to keep innovating.” Some IT leaders, however, aren’t convinced about the IBM report’s focus on IT shortcuts vs. innovation. IT spending is driven more by a desire to enable business goals, such as growth, and managing risks, including cyberattacks, says Yvette Kanouff, partner at JC2 Ventures, a tech-focused venture capital firm.


Musk’s anticipated cost-cutting hacks could weaken American cybersecurity

Although it’s too soon to predict what cybersecurity regulations DOGE might affect, experts say Musk might, at minimum, seek to strip regulatory power from agencies that align with some of his business interests, weakening their cybersecurity requirements or recommended practices in the process. Musk’s effort dovetails with what experts have already said: there is a high likelihood that the Trump administration will move to eliminate cybersecurity regulations. A landmark Supreme Court decision this summer that casts doubt on the future of all expert agency regulations reinforces this deregulatory direction. ... Even if Musk and the DOGE effort were to succeed in hacking back a significant number of regulations, experts say it won’t come easy. “One doesn’t know how enduring their relationship will be, nor how much of it is just going to be talk, nor how much opposition there might be in the state generally,” Tony Yates, former Professor of Economics at Birmingham University in the UK and a former senior advisor to the Bank of England, tells CSO. “The US has lots of checks and balances, many of which aren’t working as well as they used to,” he says. “But they’re still not entirely absent. So, it’s really hard to predict.”



Quote for the day:

“Success is not so much what we have, as it is what we are.” -- Jim Rohn

Daily Tech Digest - November 18, 2024

3 leadership lessons we can learn from ethical hackers

By nature, hackers possess a knack for looking beyond the obvious to find what’s hidden. They leverage their ingenuity and resourcefulness to address threats and anticipate future risks. And most importantly, they are unafraid to break things to make them better. Likewise, when leading an organization, you are often faced with problems that, from the outside, look unsurmountable. You must handle challenges that threaten your internal culture or your product roadmap, and it’s up to you to decide the right path toward progress. Now is the most critical time to find those hidden opportunities to strengthen your organization and remain fearless in your decisions toward a stronger path. ... Leaders must remove ego and cultivate open communication within their organizations. At HackerOne, we build accountability through company-wide weekly Ask Me Anything (AMA) sessions to share organizational knowledge, ask tough questions about the business, and encourage employees to share their perspectives openly without fear of retaliation. ... Most hackers are self-taught enthusiasts. Young and without formal cybersecurity training, they are driven by a passion for their craft. Internal drive propels them to continue their search for what others miss. If there is a way to see the gaps, they will find them. 


So, you don’t have a chief information security officer? 9 signs your company needs one

The cost to hire and retain a CISO is a major stumbling block for some organizations. Even promoting someone from within to a newly created CISO post can be expensive: total compensation for a full-time CISO in the US now averages $565,000 per year, not including other costs that often come with filling the position. ... Running cybersecurity on top of their own duties can be a tricky balancing act for some CIOs, says Cameron Smith, advisory lead for cybersecurity and data privacy at Info-Tech Research Group in London, Ontario. “A CIO has a lot of objectives or goals that don’t relate to security, and those sometimes conflict with one another. Security oftentimes can be at odds with certain productivity goals. But both of those (roles) should be aimed at advancing the success of the organization,” Smith says. ... A virtual CISO is one option for companies seeking to bolster cybersecurity without a full-time CISO. Black says this approach could make sense for companies trying to lighten the load of their overburdened CIO or CTO, as well as firms lacking the size, budget, or complexity to justify a permanent CISO. ... Not having a CISO in place could cost your company business with existing clients or prospective customers who operate in regulated sectors, expect their partners or suppliers to have a rigorous security framework, or require it for certain high-level projects.


Most importantly, AI agents can make advanced capabilities, including real-time data analysis, predictive modeling, and autonomous decision-making, available to a much wider group of people in any organization. That, in turn, gives companies a way to harness the full potential of their data. Simply put, AI agents are rapidly becoming essential tools for business managers and data analysts in industrial businesses, including those in chemical production, manufacturing, energy sectors, and more. ... In the chemical industry, AI agents can monitor and control chemical processes in real time, minimizing risks associated with equipment failures, leaks, or hazardous reactions. By analyzing data from sensors and operational equipment, AI agents can predict potential failures and recommend preventive maintenance actions. This reduces downtime, improves safety, and enhances overall production efficiency. ... AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries. For business managers and data analysts, the key takeaway is clear: AI agents are not just a future possibility—they are a present necessity, capable of driving efficiency, innovation, and growth in today’s competitive industrial environment.


Want to Modernize Your Apps? Start By Modernizing Your Software Delivery Processes

A healthier approach to app modernization is to focus on modernizing your processes. Despite momentous changes in application deployment technology over the past decade or two, the development processes that best drive software innovation and efficiency — like the interrelated concepts and practices of agile, continuous integration/continuous delivery (CI/CD) and DevOps — have remained more or less the same. This is why modernizing your application delivery processes to take advantage of the most innovative techniques should be every business’s real focus. When your processes are modern, your ability to leverage modern technology and update apps quickly to take advantage of new technology follows naturally. ... In addition to modifying processes themselves, app modernization should also involve the goal of changing the way organizations think about processes in general. By this, I mean pushing developers, IT admins and managers to turn to automation by default when implementing processes. This might seem unnecessary because plenty of IT professionals today talk about the importance of automation. Yet, when it comes to implementing processes, they tend to lean toward manual approaches because they are faster and simpler to implement initially. 


The ‘Great IT Rebrand’: Restructuring IT for business success

To champion his reimagined vision for IT, BBNI’s Nester stresses the art of effective communication and the importance of a solid marketing campaign. In partnership with corporate communications, Nester established the Techniculture brand and lineup of related events specifically designed to align technology, business, and culture in support of enterprise goals. Quarterly Techniculture town hall meetings anchored by both business and technology leaders keep the several hundred Technology Solutions team members abreast of business priorities and familiar with the firm’s money-making mechanics, including a window into how technology helps achieve specific revenue goals, Nester explains. “It’s a can’t-miss event and our largest team engagement — even more so than the CEO videos,” he contends. The next pillar of the Techniculture foundation is Techniculture Live, an annual leadership summit. One third of the Technology Solutions Group, about 250 teammates by Nester’s estimates, participate in the event, which is not a deep dive into the latest technologies, but rather spotlights business performance and technology initiatives that have been most impactful to achieving corporate goals.


The Role of DSPM in Data Compliance: Going Beyond CSPM for Regulatory Success

DSPM is a data-focused approach to securing the cloud environment. By addressing cloud security from the angle of discovering sensitive data, DSPM is centered on protecting an organization’s valuable data. This approach helps organizations discover, classify, and protect data across all platforms, including IaaS, PaaS, and SaaS applications. Where CSPM is focused on finding vulnerabilities and risks for teams to remediate across the cloud environment, DSPM “gives security teams visibility into where cloud data is stored” and detects risks to that data. Security misconfigurations and vulnerabilities that may result in the exposure of data can be flagged by DSPM solutions for remediation, helping to protect an organization’s most sensitive resources. Beyond simply discovering sensitive data, DSPM solutions also address many questions of data access and governance. They provide insight into not only where sensitive data is located, but which users have access to it, how it is used, and the security posture of the data store. ... Every organization undoubtedly has valuable and sensitive enterprise, customer, and employee data that must be protected against a wide range of threats. Organizations can reap a great deal of benefits from DSPM in protecting data that is not stored on-premises.


The hidden challenges of AI development no one talks about

Currently, AI developers spend too much of their time (up to 75%) with the "tooling" they need to build applications. Unless they have the technology to spend less time tooling, these companies won't be able to scale their AI applications. To add to technical challenges, nearly every AI startup is reliant on NVIDIA GPU compute to train and run their AI models, especially at scale. Developing a good relationship with hardware suppliers or cloud providers like Paperspace can help startups, but the cost of purchasing or renting these machines quickly becomes the largest expense any smaller company will run into. Additionally, there is currently a battle to hire and keep AI talent. We've seen recently how companies like OpenAI are trying to poach talent from other heavy hitters like Google, which makes the process for attracting talent at smaller companies much more difficult. ... Training a Deep Learning model is almost always extremely expensive. This is a result of the combined function of resource costs for the hardware itself, data collection, and employees. In order to ameliorate this issue facing the industry's newest players, we aim to achieve several goals for our users: Creating an easy-to-use environment, introducing an inherent replicability across our products, and providing access at as low costs as possible.


Transforming code scanning and threat detection with GenAI

The complexity of software components and stacks can sometimes be mind-bending, so it is imperative to connect all these dots in as seamless and hands-free a way as possible. ... If you’re a developer with a mountain of feature requests and bug fixes on your plate and then receive a tsunami of security tickets that nobody’s incentivized to care about… guess which ones are getting pushed to the bottom of the pile? Generative AI-based agentic workflows are giving cybersecurity and engineering teams alike reason to see the light at the end of the tunnel and to consider the possibility that a secure SDLC (SSDLC) is on the near-term horizon. And we’re seeing some promising changes already today in the market. Imagine having an intelligent assistant that can automatically track issues, figure out which ones matter most, suggest fixes, and then test and validate those fixes, all at the speed of computing! We still need our developers to oversee things and make the final calls, but the software agent swallows most of the burden of running an efficient program. ... AI’s evolution in code scanning fundamentally reshapes our approach to security. Optimized generative AI LLMs can assess millions of lines of code in seconds and pay attention to even the most subtle and nuanced set of patterns, finding the needle in a haystack, which is almost always missed by humans.


5 Tips for Optimizing Multi-Region Cloud Configurations

Multi-region cloud configurations get very complicated very quickly, especially for active-active environments where you’re replicating data constantly. Containerized microservice-based applications allow for faster startup times, but they also drive up the number of resources you’ll need. Even active-passive environments for cold backup-and-restore use cases are resource-heavy. You’ll still need a lot of instances, AMI IDs, snapshots, and more to achieve a reasonable disaster recovery turnaround time. ... The CAP theorem forces you to choose only two of the three options: consistency, availability, and partition tolerance. Since we’re configuring for multi-region, partition tolerance is non-negotiable, which leaves a battle between availability and consistency. Yes, you can hold onto both, but you’ll drive high costs and an outsized management burden. If you’re running active-passive environments, opt for consistency over availability. This allows you to use Platform-as-a-Service (PaaS) solutions to replicate your database to your passive region. ... For active-passive environments, routing isn’t a serious concern. You’ll use default priority global routing to support failover handling, end of story. But for active-active environments, you’ll want different routing policies depending on the situation in that region.
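
For the active-passive case, the "default priority global routing" described here is typically implemented as DNS failover records. As a hedged sketch using AWS Route 53 via boto3 (the hosted zone ID, domain, addresses, and health check ID are placeholders), the primary region carries a health check and the passive region is marked as secondary:

```python
# Sketch: active-passive failover routing in Route 53. All identifiers are placeholders.
import boto3

route53 = boto3.client("route53")
ZONE_ID = "Z0000000EXAMPLE"   # hypothetical hosted zone

def failover_record(role, ip, health_check_id=None):
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": role.lower(),
        "Failover": role,      # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "203.0.113.10", health_check_id="hc-primary-example"),
        failover_record("SECONDARY", "203.0.113.20"),  # passive region takes over on failure
    ]},
)
```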


Why API-First Matters in an AI-Driven World

Implementing an API-first approach at scale is a nontrivial exercise. The fundamental reason for this is that API-first involves “people.” It’s central to the methodology that APIs are embraced as socio-technical assets, and therefore, it requires a change in how “people,” both technical and non-technical, work and collaborate. There are some common objections to adopting API-First within organizations that raise their head, as well as some newer framings, given the eagerness of many to participate in the AI-hyped landscape. ... Don’t try to design for all eventualities. Instead, follow good extensibility patterns that enable future evolution and design “just enough” of the API based on current needs. There are added benefits when you combine this tactic with API specifications, as you can get fast feedback loops on that design before any investments are made in writing code or creating test suites. ... An API-First approach is powerful precisely because it starts with a use-case-oriented mindset, thinking about the problem being solved and how best to present data that aligns with that solution. By exposing data thoughtfully through APIs, companies can encapsulate domain-specific knowledge, apply business logic, and ensure that data is served securely, self-service, and tailored to business needs. 



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - November 17, 2024

Why Are User Acceptance Tests Such a Hassle?

In the reality of many projects, UAT often becomes irreplaceable and needs to be extensive, covering a larger part of the testing pyramid than recommended ... Automated end-to-end tests often fail to cover third-party integrations due to limited access and support, requiring UAT. For instance, if a system integrates with an analytics tool, any changes to the system may require stakeholders to verify the results on the tool as well. ... In industries such as finance, healthcare, or aviation, where regulatory compliance is critical, UATs must ensure that the software meets all legal and regulatory requirements. ... In projects involving intricate business workflows, many UATs may be necessary to cover all possible scenarios and edge cases. ... This process can quickly become complex when dealing with numerous test cases, engineering teams, and stakeholder groups. This complexity often results in significant manual effort in both testing and collaboration. Even though UATs are cumbersome, most companies do not automate them because they focus on validating business requirements and user experiences, which require subjective assessment. However, automating UAT can save testing hours and the effort to coordinate testing sessions.


The full-stack architect: A new lead role for crystalizing EA value

First, the full-stack architect could ensure the function’s other architects are indeed aligned, not only among themselves, but with stakeholders from both the business and engineering. That last bit shouldn’t be overlooked, Ma says. While much attention gets paid to the notion that architects should be able to work fluently with the business, they should, in fact, work just as fluently with Engineering, meaning that whoever steps into the role should wield deep technical expertise, an attribute vital to earning the respect of engineers, and one that more traditional enterprise architects lack. For both types of stakeholders, then, the full-stack architect could serve as a single point of contact. Less “telephone,” as it were. And it could clarify the value proposition of EA as a singular function — and with respect to the business it serves. Finally, the role would probably make a few other architects unnecessary, or at least allow them to concentrate more fully on their respective principal responsibilities. No longer would they have to coordinate their peers. Ma’s inspiration for the role finds its origin in the full-stack engineer, as Ma sees EA today evolving similarly to how software engineering evolved about 15 years ago. 


Groundbreaking 8-Photon Qubit Chip Accelerates Quantum Computing

Quantum circuits based on photonic qubits are among the most promising technologies currently under active research for building a universal quantum computer. Several photonic qubits can be integrated into a tiny silicon chip as small as a fingernail, and a large number of these tiny chips can be connected via optical fibers to form a vast network of qubits, enabling the realization of a universal quantum computer. Photonic quantum computers offer advantages in terms of scalability through optical networking, room-temperature operation, and the low energy consumption. ... The research team measured the Hong-Ou-Mandel effect, a fascinating quantum phenomenon in which two different photons entering from different directions can interfere and travel together along the same path. In another notable quantum experiment, they demonstrated a 4-qubit entangled state on a 4-qubit integrated circuit (5mm x 5mm). Recently, they have expanded their research to 8 photon experiments using an 8-qubit integrated circuit (10mm x 5mm). The researchers plan to fabricate 16-qubit chips within this year, followed by scaling up to 32-qubits as part of their ongoing research toward quantum computation.


Mastering The Role Of CISO: What The Job Really Entails

A big part of a CISO’s job is working effectively with other senior executives. Success isn’t just about technical prowess; it’s about building relationships and navigating the politics of the C-suite. Whether you’re collaborating with the CEO, CFO, CIO, or CLO, you must be able to work within a broader leadership context to align security goals with business objectives. One of the most important lessons I’ve learned is to involve key stakeholders early and often. Don’t wait until you have a finalized proposal to present; get input and feedback from the relevant parties—especially the CTO, CIO, CLO, and CFO—at every stage. This collaborative approach helps you refine your security plans, ensures they are aligned with the company’s broader strategy, and reduces the likelihood of pushback when it’s time to present your final recommendations. ... While technical expertise forms the foundation of the CISO role, much of the work comes down to creative problem-solving. Being a CISO is like being a puzzle solver—you need to look at your organization’s specific challenges, risks, and goals, and figure out how to put the pieces together in a way that addresses both current and future needs.


Why Future-proofing Cybersecurity Regulatory Frameworks Is Essential

As regulations evolve, ensuring the security and privacy of the personal information used in AI training looks set to become increasingly difficult, which could lead to severe consequences for both individuals and organizations. The same survey went on to reveal that 30% of developers believe that there is a general lack of understanding among regulators who are not equipped with the right set of skills to comprehend the technology they're tasked with regulating. With skills and knowledge in question, alongside rapidly advancing AI and cybersecurity threats, what exactly should regulators keep in mind when creating regulatory frameworks that are both adaptable and effective? It's my view that, firstly, regulators should know all the options on the table when it comes to possible privacy-enhancing technologies (PETs). ... Incorporating continuous learning within the organization is also crucial, as well as allowing employees to participate in industry events and conferences to stay up to speed on the latest developments and to meet with experts. Where possible, we should be creating collaborations with the industry — for example, inviting representatives of tech companies to give internal seminars or demonstrations.


AI could alter data science as we know it - here's why

Davenport and Barkin note that generative AI will take citizen development to a whole new level. "First is through conversational user interfaces," they write. "Virtually every vendor of software today has announced or is soon to introduce a generative AI interface." "Now or in the very near future, someone interested in programming or accessing/analyzing data need only make a request to an AI system in regular language for a program containing a set of particular functions, an automation workflow with key steps and decisions, or a machine-learning analysis involving particular variables or features." ... Looking beyond these early starts, with the growth of AI, RPA, and other tools, "some citizen developers are likely to no longer be necessary, and every citizen will need to change how they do their work," Davenport and Barkin speculate. ... "The rise of AI-driven tools capable of handling data analysis, modeling, and insight generation could force a shift in how we view the role and future of data science itself," said Ligot. "Tasks like data preparation, cleansing, and even basic qualitative analysis -- activities that consume much of a data scientist's time -- are now easily automated by AI systems."


Scaling Small Language Models (SLMs) For Edge Devices: A New Frontier In AI

Small language models (SLMs) are lightweight neural network models designed to perform specialized natural language processing tasks with fewer computational resources and parameters, typically ranging from a few million to several billion parameters. Unlike large language models (LLMs), which aim for general-purpose capabilities across a wide range of applications, SLMs are optimized for efficiency, making them ideal for deployment in resource-constrained environments such as mobile devices, wearables and edge computing systems. ... One way to make SLMs work on edge devices is through model compression, which reduces the model’s size without losing much performance. Quantization is a key technique that reduces the numerical precision of the model’s weights, for example turning 32-bit floating-point values into 8-bit integers, making the model faster and lighter while largely maintaining accuracy. Think of a smart speaker—quantization helps it respond quickly to voice commands without needing cloud processing. ... The growing prominence of SLMs is reshaping the AI world, placing a greater emphasis on efficiency, privacy and real-time functionality. For everyone from AI experts to product developers and everyday users, this shift opens up exciting possibilities where powerful AI can operate directly on the devices we use daily—no cloud required.
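As a rough illustration of the quantization idea (a generic sketch under assumed tooling, not code from the article), PyTorch's post-training dynamic quantization stores the weights of selected layers as 8-bit integers; the tiny model here is a hypothetical stand-in for an SLM building block.

import torch
import torch.nn as nn

# Hypothetical stand-in for one block of a small language model
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

# Post-training dynamic quantization: Linear weights are stored as int8,
# and activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is called exactly like the original one.
x = torch.randn(1, 512)
y = quantized(x)

Storing int8 weights instead of 32-bit floats roughly quarters the memory footprint of the quantized layers, the kind of saving that makes on-device deployment feasible.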


How To Ensure Your Cloud Project Doesn’t Fail

To get the best out of your team requires striking a delicate balance between discipline and freedom. A bunch of “computer nerds” might not produce much value if left completely to their own devices. But they also won’t be innovative if not given freedom to explore and mess around with ideas. When building your Cloud team, look beyond technical skills. Seek individuals who are curious, adaptable, and collaborative. These traits are crucial for navigating the ever-changing landscape of Cloud technology and fostering an environment of continuous innovation. ... Culture plays a pivotal role in successful Cloud adoption. To develop the right culture for Cloud innovation, start by clearly defining and communicating your company's values and goals. You should also work to foster an environment that encourages calculated risk-taking and learning from failures as well as promotes collaboration and knowledge sharing across teams. Finally, make sure to incentivise your culture by recognising and rewarding innovation, not just successful outcomes. ... Having a well-defined culture is just the first step. To truly harness the power of your talent, you need to embed your definition of talent into every aspect of your company's processes.


2025 Tech Predictions – A Year of Realisation, Regulations and Resilience

A number of businesses are expected to move workloads from the public cloud back to on-premises data centres to manage costs and improve efficiencies. This is the essence of data freedom – the ability to move and store data wherever you need it, with no vendor lock-in. Organisations that previously shifted to the public cloud now realise that a hybrid approach is more advantageous for achieving cloud economics. While the public cloud has its benefits, local infrastructure can offer superior control and performance in certain instances, such as for resource-intensive applications that need to remain closer to the edge. ... As these threats become more commonplace, businesses are expected to adopt more proactive cybersecurity strategies and advanced identity validation methods, such as voice authentication. The uptake of AI-powered solutions to prevent and prepare for cyberattacks is also expected to increase. ... Unsurprisingly, the continuous proliferation of data into 2025 will see the introduction of new AI-focused roles. Chief AI Officers (CAIOs) are responsible for overseeing the ethical, responsible and effective use of AI across organisations and bridging the gap between technical teams and key stakeholders.


In an Age of AI, Cloud Security Skills Remain in Demand

While identifying and recruiting the right tech and security talent is crucial, cybersecurity experts note that organizations must make a conscientious choice to invest in cloud security, especially as more data is uploaded and stored within SaaS apps and third-party, infrastructure-as-a-service (IaaS) providers such as Amazon Web Services and Microsoft Azure. “To close the cloud security skills gap, organizations should prioritize cloud-specific security training and certifications for their IT staff,” Stephen Kowski, field CTO at security firm SlashNext, told Dice. “Implementing cloud-native security tools that provide comprehensive visibility and protection across multi-cloud environments can help mitigate risks. Engaging managed security service providers with cloud expertise can also supplement in-house capabilities and provide valuable guidance.” Jason Soroko, a senior Fellow at Sectigo, expressed similar sentiments when it comes to organizations assisting in building out their cloud security capabilities and developing the talent needed to fulfill this mission. “To close the cloud security skills gap, organizations should offer targeted training programs, support certification efforts and consider hiring experts to mentor existing teams,” Soroko told Dice. 



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson