Daily Tech Digest - November 18, 2024

3 leadership lessons we can learn from ethical hackers

By nature, hackers possess a knack for looking beyond the obvious to find what’s hidden. They leverage their ingenuity and resourcefulness to address threats and anticipate future risks. And most importantly, they are unafraid to break things to make them better. Likewise, when leading an organization, you are often faced with problems that, from the outside, look insurmountable. You must handle challenges that threaten your internal culture or your product roadmap, and it’s up to you to decide the right path toward progress. Now is the most critical time to find those hidden opportunities to strengthen your organization and remain fearless in your decisions toward a stronger path. ... Leaders must remove ego and cultivate open communication within their organizations. At HackerOne, we build accountability through company-wide weekly Ask Me Anything (AMA) sessions to share organizational knowledge, ask tough questions about the business, and encourage employees to share their perspectives openly without fear of retaliation. ... Most hackers are self-taught enthusiasts. Young and without formal cybersecurity training, they are driven by a passion for their craft. Internal drive propels them to continue their search for what others miss. If there is a way to see the gaps, they will find them.


So, you don’t have a chief information security officer? 9 signs your company needs one

The cost to hire and retain a CISO is a major stumbling block for some organizations. Even promoting someone from within to a newly created CISO post can be expensive: total compensation for a full-time CISO in the US now averages $565,000 per year, not including other costs that often come with filling the position. ... Running cybersecurity on top of their own duties can be a tricky balancing act for some CIOs, says Cameron Smith, advisory lead for cybersecurity and data privacy at Info-Tech Research Group in London, Ontario. “A CIO has a lot of objectives or goals that don’t relate to security, and those sometimes conflict with one another. Security oftentimes can be at odds with certain productivity goals. But both of those (roles) should be aimed at advancing the success of the organization,” Smith says. ... A virtual CISO is one option for companies seeking to bolster cybersecurity without a full-time CISO. Black says this approach could make sense for companies trying to lighten the load of their overburdened CIO or CTO, as well as firms lacking the size, budget, or complexity to justify a permanent CISO. ... Not having a CISO in place could cost your company business with existing clients or prospective customers who operate in regulated sectors, expect their partners or suppliers to have a rigorous security framework, or require it for certain high-level projects.


Most importantly, AI agents can make advanced capabilities, including real-time data analysis, predictive modeling, and autonomous decision-making, available to a much wider group of people in any organization. That, in turn, gives companies a way to harness the full potential of their data. Simply put, AI agents are rapidly becoming essential tools for business managers and data analysts in industrial businesses, including those in chemical production, manufacturing, energy sectors, and more. ... In the chemical industry, AI agents can monitor and control chemical processes in real time, minimizing risks associated with equipment failures, leaks, or hazardous reactions. By analyzing data from sensors and operational equipment, AI agents can predict potential failures and recommend preventive maintenance actions. This reduces downtime, improves safety, and enhances overall production efficiency. ... AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries. For business managers and data analysts, the key takeaway is clear: AI agents are not just a future possibility—they are a present necessity, capable of driving efficiency, innovation, and growth in today’s competitive industrial environment.
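
To make the predictive maintenance idea concrete, here is a minimal sketch, assuming a simple rolling-baseline anomaly check in Python; the sensor data, window size, and threshold are hypothetical stand-ins for what a production system would learn from historical data:

    import statistics

    def anomaly_scores(readings, window=50, threshold=3.0):
        """Flag sensor readings that deviate sharply from the recent baseline.

        A reading whose z-score against the trailing window exceeds the
        threshold is treated as a potential precursor to equipment failure.
        """
        alerts = []
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mean = statistics.fmean(baseline)
            stdev = statistics.stdev(baseline) or 1e-9  # avoid divide-by-zero
            z = abs(readings[i] - mean) / stdev
            if z > threshold:
                alerts.append((i, readings[i], round(z, 2)))
        return alerts

    # Hypothetical vibration data from a pump sensor: stable, then a spike.
    vibration = [0.42 + 0.01 * (i % 5) for i in range(120)] + [0.95]
    for index, value, score in anomaly_scores(vibration):
        print(f"reading {index}: value={value} z={score} -> schedule inspection")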


Want to Modernize Your Apps? Start By Modernizing Your Software Delivery Processes

A healthier approach to app modernization is to focus on modernizing your processes. Despite momentous changes in application deployment technology over the past decade or two, the development processes that best drive software innovation and efficiency — like the interrelated concepts and practices of agile, continuous integration/continuous delivery (CI/CD) and DevOps — have remained more or less the same. This is why modernizing your application delivery processes to take advantage of the most innovative techniques should be every business’s real focus. When your processes are modern, your ability to leverage modern technology and update apps quickly to take advantage of new technology follows naturally. ... In addition to modifying processes themselves, app modernization should also involve the goal of changing the way organizations think about processes in general. By this, I mean pushing developers, IT admins and managers to turn to automation by default when implementing processes. This might seem unnecessary because plenty of IT professionals today talk about the importance of automation. Yet, when it comes to implementing processes, they tend to lean toward manual approaches because they are faster and simpler to implement initially. 


The ‘Great IT Rebrand’: Restructuring IT for business success

To champion his reimagined vision for IT, BBNI’s Nester stresses the art of effective communication and the importance of a solid marketing campaign. In partnership with corporate communications, Nester established the Techniculture brand and lineup of related events specifically designed to align technology, business, and culture in support of enterprise goals. Quarterly Techniculture town hall meetings anchored by both business and technology leaders keep the several hundred Technology Solutions team members abreast of business priorities and familiar with the firm’s money-making mechanics, including a window into how technology helps achieve specific revenue goals, Nester explains. “It’s a can’t-miss event and our largest team engagement — even more so than the CEO videos,” he contends. The next pillar of the Techniculture foundation is Techniculture Live, an annual leadership summit. One third of the Technology Solutions Group, about 250 teammates by Nester’s estimates, participate in the event, which is not a deep dive into the latest technologies, but rather spotlights business performance and technology initiatives that have been most impactful to achieving corporate goals.


The Role of DSPM in Data Compliance: Going Beyond CSPM for Regulatory Success

DSPM is a data-focused approach to securing the cloud environment. By addressing cloud security from the angle of discovering sensitive data, DSPM is centered on protecting an organization’s valuable data. This approach helps organizations discover, classify, and protect data across all platforms, including IaaS, PaaS, and SaaS applications. Where CSPM is focused on finding vulnerabilities and risks for teams to remediate across the cloud environment, DSPM “gives security teams visibility into where cloud data is stored” and detects risks to that data. Security misconfigurations and vulnerabilities that may result in the exposure of data can be flagged by DSPM solutions for remediation, helping to protect an organization’s most sensitive resources. Beyond simply discovering sensitive data, DSPM solutions also address many questions of data access and governance. They provide insight into not only where sensitive data is located, but which users have access to it, how it is used, and the security posture of the data store. ... Every organization undoubtedly has valuable and sensitive enterprise, customer, and employee data that must be protected against a wide range of threats. Organizations can reap a great deal of benefits from DSPM in protecting data that is not stored on-premises.
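
As a rough illustration of the discovery-and-classification step described above, here is a minimal Python sketch; the detector patterns and store name are illustrative assumptions, and commercial DSPM tools use far richer classifiers plus access and posture analysis:

    import re

    # Illustrative detectors only; real DSPM products go far beyond regex.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def classify(store_name, documents):
        """Scan a data store's contents and report which kinds of
        sensitive data it appears to hold."""
        findings = {}
        for doc in documents:
            for label, pattern in PATTERNS.items():
                if pattern.search(doc):
                    findings[label] = findings.get(label, 0) + 1
        return {"store": store_name, "sensitive_data": findings}

    # Hypothetical objects pulled from a cloud storage bucket.
    docs = ["contact: jane@example.com", "ssn on file: 123-45-6789"]
    print(classify("s3://customer-exports", docs))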


The hidden challenges of AI development no one talks about

Currently, AI developers spend too much of their time (up to 75%) with the "tooling" they need to build applications. Unless they have the technology to spend less time tooling, these companies won't be able to scale their AI applications. To add to the technical challenges, nearly every AI startup is reliant on NVIDIA GPU compute to train and run their AI models, especially at scale. Developing a good relationship with hardware suppliers or cloud providers like Paperspace can help startups, but the cost of purchasing or renting these machines quickly becomes the largest expense any smaller company will run into. Additionally, there is currently a battle to hire and keep AI talent. We've seen recently how companies like OpenAI are trying to poach talent from other heavy hitters like Google, which makes the process of attracting talent at smaller companies much more difficult. ... Training a Deep Learning model is almost always extremely expensive. This is a result of the combined resource costs of the hardware itself, data collection, and employees. In order to ameliorate this issue facing the industry's newest players, we aim to achieve several goals for our users: creating an easy-to-use environment, introducing an inherent replicability across our products, and providing access at as low a cost as possible.


Transforming code scanning and threat detection with GenAI

The complexity of software components and stacks can sometimes be mind-bending, so it is imperative to connect all these dots in as seamless and hands-free a way as possible. ... If you’re a developer with a mountain of feature requests and bug fixes on your plate and then receive a tsunami of security tickets that nobody’s incentivized to care about… guess which ones are getting pushed to the bottom of the pile? Generative AI-based agentic workflows are giving cybersecurity and engineering teams alike reason to see the light at the end of the tunnel and to consider the possibility that a secure software development lifecycle (SSDLC) is on the near-term horizon. And we’re seeing some promising changes already today in the market. Imagine having an intelligent assistant that can automatically track issues, figure out which ones matter most, suggest fixes, and then test and validate those fixes, all at the speed of computing! We still need our developers to oversee things and make the final calls, but the software agent shoulders most of the burden of running an efficient program. ... AI’s evolution in code scanning fundamentally reshapes our approach to security. Optimized generative AI LLMs can assess millions of lines of code in seconds and pay attention to even the most subtle and nuanced patterns, finding the needle in a haystack that humans almost always miss.
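
As a sketch of the triage step such an assistant might perform, consider the Python fragment below; the scoring fields and weights are illustrative assumptions, not any vendor's actual model:

    # Rank security findings so the few that matter most rise above the
    # ticket tsunami. Fields and weights here are illustrative assumptions.

    def priority(finding):
        score = finding["severity"] * 2.0      # CVSS-like base severity
        if finding["exploit_available"]:
            score += 3.0                       # known exploit in the wild
        if finding["internet_facing"]:
            score += 2.0                       # reachable attack surface
        return score

    findings = [
        {"id": "F-101", "severity": 9.8, "exploit_available": True,  "internet_facing": True},
        {"id": "F-102", "severity": 5.3, "exploit_available": False, "internet_facing": False},
        {"id": "F-103", "severity": 7.5, "exploit_available": True,  "internet_facing": False},
    ]

    for f in sorted(findings, key=priority, reverse=True):
        print(f"{f['id']}: priority={priority(f):.1f}")
    # A developer still reviews the queue; the agent only orders the work.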


5 Tips for Optimizing Multi-Region Cloud Configurations

Multi-region cloud configurations get very complicated very quickly, especially for active-active environments where you’re replicating data constantly. Containerized microservice-based applications allow for faster startup times, but they also drive up the number of resources you’ll need. Even active-passive environments for cold backup-and-restore use cases are resource-heavy. You’ll still need a lot of instances, AMI IDs, snapshots, and more to achieve a reasonable disaster recovery turnaround time. ... The CAP theorem forces you to choose only two of the three options: consistency, availability, and partition tolerance. Since we’re configuring for multi-region, partition tolerance is non-negotiable, which leaves a battle between availability and consistency. Yes, you can hold onto both, but you’ll drive high costs and an outsized management burden. If you’re running active-passive environments, opt for consistency over availability. This allows you to use Platform-as-a-Service (PaaS) solutions to replicate your database to your passive region. ... For active-passive environments, routing isn’t a serious concern. You’ll use default priority global routing to support failover handling, end of story. But for active-active environments, you’ll want different routing policies depending on the situation in that region.
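
For example, the default-priority failover routing described for active-passive setups might be configured along these lines with boto3 and Amazon Route 53; this is a sketch, and the zone ID, domain, addresses, and health check ID are placeholders:

    import boto3

    route53 = boto3.client("route53")

    def upsert_failover_record(zone_id, name, ip, role, health_check_id=None):
        """Create or update an A record with a failover routing policy."""
        record = {
            "Name": name,
            "Type": "A",
            "SetIdentifier": f"{name}-{role.lower()}",
            "Failover": role,                  # "PRIMARY" or "SECONDARY"
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        }
        if health_check_id:                    # lets Route 53 detect failure
            record["HealthCheckId"] = health_check_id
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={"Changes": [{"Action": "UPSERT",
                                      "ResourceRecordSet": record}]},
        )

    # Placeholder zone ID, domain, addresses, and health check ID.
    upsert_failover_record("Z0EXAMPLE", "app.example.com.", "203.0.113.10",
                           "PRIMARY", health_check_id="hc-primary-placeholder")
    upsert_failover_record("Z0EXAMPLE", "app.example.com.", "198.51.100.10",
                           "SECONDARY")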


Why API-First Matters in an AI-Driven World

Implementing an API-first approach at scale is a nontrivial exercise. The fundamental reason for this is that API-first involves “people.” It’s central to the methodology that APIs are embraced as socio-technical assets, and therefore, it requires a change in how “people,” both technical and non-technical, work and collaborate. There are some common objections to adopting API-first that rear their heads within organizations, as well as some newer framings, given the eagerness of many to participate in the AI-hyped landscape. ... Don’t try to design for all eventualities. Instead, follow good extensibility patterns that enable future evolution and design “just enough” of the API based on current needs. There are added benefits when you combine this tactic with API specifications, as you can get fast feedback loops on that design before any investments are made in writing code or creating test suites. ... An API-first approach is powerful precisely because it starts with a use-case-oriented mindset, thinking about the problem being solved and how best to present data that aligns with that solution. By exposing data thoughtfully through APIs, companies can encapsulate domain-specific knowledge, apply business logic, and ensure that data is served securely, in a self-service manner, and tailored to business needs.



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - November 17, 2024

Why Are User Acceptance Tests Such a Hassle?

In the reality of many projects, UAT often becomes irreplaceable and needs to be extensive, covering a larger part of the testing pyramid than recommended ... Automated end-to-end tests often fail to cover third-party integrations due to limited access and support, requiring UAT. For instance, if a system integrates with an analytics tool, any changes to the system may require stakeholders to verify the results on the tool as well. ... In industries such as finance, healthcare, or aviation, where regulatory compliance is critical, UATs must ensure that the software meets all legal and regulatory requirements. ... In projects involving intricate business workflows, many UATs may be necessary to cover all possible scenarios and edge cases. ... This process can quickly become complex when dealing with numerous test cases, engineering teams, and stakeholder groups. This complexity often results in significant manual effort in both testing and collaboration. Even though UATs are cumbersome, most companies do not automate them because they focus on validating business requirements and user experiences, which require subjective assessment. However, automating UAT can save testing hours and the effort to coordinate testing sessions.
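
Where automation is worthwhile, a UAT scenario can be captured as an executable test. Below is a minimal sketch using pytest conventions and the requests library, assuming a hypothetical staging endpoint and response shape:

    import requests

    BASE_URL = "https://staging.example.com"  # hypothetical staging host

    def test_order_submission_reaches_analytics():
        """UAT scenario: a submitted order must be visible to the
        downstream analytics integration, not just accepted by the API."""
        order = {"sku": "ABC-123", "quantity": 2}
        created = requests.post(f"{BASE_URL}/orders", json=order, timeout=10)
        assert created.status_code == 201

        order_id = created.json()["id"]
        analytics = requests.get(f"{BASE_URL}/analytics/orders/{order_id}",
                                 timeout=10)
        assert analytics.status_code == 200
        assert analytics.json()["quantity"] == 2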


The full-stack architect: A new lead role for crystalizing EA value

First, the full-stack architect could ensure the function’s other architects are indeed aligned, not only among themselves, but with stakeholders from both the business and engineering. That last bit shouldn’t be overlooked, Ma says. While much attention gets paid to the notion that architects should be able to work fluently with the business, they should, in fact, work just as fluently with Engineering, meaning that whoever steps into the role should wield deep technical expertise, an attribute vital to earning the respect of engineers, and one that more traditional enterprise architects lack. For both types of stakeholders, then, the full-stack architect could serve as a single point of contact. Less “telephone,” as it were. And it could clarify the value proposition of EA as a singular function — and with respect to the business it serves. Finally, the role would probably make a few other architects unnecessary, or at least allow them to concentrate more fully on their respective principal responsibilities. No longer would they have to coordinate their peers. Ma’s inspiration for the role finds its origin in the full-stack engineer, as Ma sees EA today evolving similarly to how software engineering evolved about 15 years ago. 


Groundbreaking 8-Photon Qubit Chip Accelerates Quantum Computing

Quantum circuits based on photonic qubits are among the most promising technologies currently under active research for building a universal quantum computer. Several photonic qubits can be integrated into a tiny silicon chip as small as a fingernail, and a large number of these tiny chips can be connected via optical fibers to form a vast network of qubits, enabling the realization of a universal quantum computer. Photonic quantum computers offer advantages in terms of scalability through optical networking, room-temperature operation, and low energy consumption. ... The research team measured the Hong-Ou-Mandel effect, a fascinating quantum phenomenon in which two different photons entering from different directions can interfere and travel together along the same path. In another notable quantum experiment, they demonstrated a 4-qubit entangled state on a 4-qubit integrated circuit (5mm x 5mm). Recently, they have expanded their research to 8-photon experiments using an 8-qubit integrated circuit (10mm x 5mm). The researchers plan to fabricate 16-qubit chips within this year, followed by scaling up to 32 qubits as part of their ongoing research toward quantum computation.
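
For readers curious about the underlying math, the effect follows from the standard 50:50 beamsplitter transformation of the photon creation operators (a textbook sketch, in LaTeX notation):

    a^\dagger \to \tfrac{1}{\sqrt{2}}(c^\dagger + d^\dagger), \qquad
    b^\dagger \to \tfrac{1}{\sqrt{2}}(c^\dagger - d^\dagger)

so two indistinguishable photons entering from different ports evolve as

    a^\dagger b^\dagger \lvert 0,0 \rangle \;\to\;
    \tfrac{1}{2}\big((c^\dagger)^2 - (d^\dagger)^2\big)\lvert 0,0 \rangle
    = \tfrac{1}{\sqrt{2}}\big(\lvert 2,0 \rangle - \lvert 0,2 \rangle\big).

The cross terms cancel, so the two photons never exit from different ports; they bunch and travel together along the same path, exactly as described above.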


Mastering The Role Of CISO: What The Job Really Entails

A big part of a CISO’s job is working effectively with other senior executives. Success isn’t just about technical prowess; it’s about building relationships and navigating the politics of the C-suite. Whether you’re collaborating with the CEO, CFO, CIO, or CLO, you must be able to work within a broader leadership context to align security goals with business objectives. One of the most important lessons I’ve learned is to involve key stakeholders early and often. Don’t wait until you have a finalized proposal to present; get input and feedback from the relevant parties—especially the CTO, CIO, CLO, and CFO—at every stage. This collaborative approach helps you refine your security plans, ensures they are aligned with the company’s broader strategy, and reduces the likelihood of pushback when it’s time to present your final recommendations. ... While technical expertise forms the foundation of the CISO role, much of the work comes down to creative problem-solving. Being a CISO is like being a puzzle solver—you need to look at your organization’s specific challenges, risks, and goals, and figure out how to put the pieces together in a way that addresses both current and future needs.


Why Future-proofing Cybersecurity Regulatory Frameworks Is Essential

As regulations evolve, ensuring the security and privacy of the personal information used in AI training looks set to become increasingly difficult, which could lead to severe consequences for both individuals and organizations. The same survey went on to reveal that 30% of developers believe that there is a general lack of understanding among regulators who are not equipped with the right set of skills to comprehend the technology they're tasked with regulating. With skills and knowledge in question, alongside rapidly advancing AI and cybersecurity threats, what exactly should regulators keep in mind when creating regulatory frameworks that are both adaptable and effective? It's my view that, firstly, regulators should know all the options on the table when it comes to possible privacy-enhancing technologies (PETs). ... Incorporating continuous learning within the organization is also crucial, as well as allowing employees to participate in industry events and conferences to stay up to speed on the latest developments and to meet with experts. Where possible, we should be creating collaborations with the industry — for example, inviting representatives of tech companies to give internal seminars or demonstrations.


AI could alter data science as we know it - here's why

Davenport and Barkin note that generative AI will take citizen development to a whole new level. "First is through conversational user interfaces," they write. "Virtually every vendor of software today has announced or is soon to introduce a generative AI interface." "Now or in the very near future, someone interested in programming or accessing/analyzing data need only make a request to an AI system in regular language for a program containing a set of particular functions, an automation workflow with key steps and decisions, or a machine-learning analysis involving particular variables or features." ... Looking beyond these early starts, with the growth of AI, RPA, and other tools, "some citizen developers are likely to no longer be necessary, and every citizen will need to change how they do their work," Davenport and Barkin speculate. ... "The rise of AI-driven tools capable of handling data analysis, modeling, and insight generation could force a shift in how we view the role and future of data science itself," said Ligot. "Tasks like data preparation, cleansing, and even basic qualitative analysis -- activities that consume much of a data scientist's time -- are now easily automated by AI systems."


Scaling Small Language Models (SLMs) For Edge Devices: A New Frontier In AI

Small language models (SLMs) are lightweight neural network models designed to perform specialized natural language processing tasks with fewer computational resources and parameters, typically ranging from a few million to several billion parameters. Unlike large language models (LLMs), which aim for general-purpose capabilities across a wide range of applications, SLMs are optimized for efficiency, making them ideal for deployment in resource-constrained environments such as mobile devices, wearables and edge computing systems. ... One way to make SLMs work on edge devices is through model compression. This reduces the model’s size without losing much performance. Quantization is a key technique that simplifies the model’s data, like turning 32-bit numbers into 8-bit, making the model faster and lighter while maintaining accuracy. Think of a smart speaker—quantization helps it respond quickly to voice commands without needing cloud processing. ... The growing prominence of SLMs is reshaping the AI world, placing a greater emphasis on efficiency, privacy and real-time functionality. For everyone from AI experts to product developers and everyday users, this shift opens up exciting possibilities where powerful AI can operate directly on the devices we use daily—no cloud required.
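
A tiny NumPy sketch makes the 32-bit-to-8-bit idea concrete (symmetric per-tensor quantization; the random weight matrix is a stand-in for a real layer):

    import numpy as np

    def quantize_int8(weights):
        """Symmetric linear quantization: map float32 weights onto the
        int8 range [-127, 127] using one per-tensor scale factor."""
        scale = np.abs(weights).max() / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)  # stand-in weight matrix
    q, scale = quantize_int8(w)
    print("max round-trip error:", np.abs(w - dequantize(q, scale)).max())
    # The int8 tensor needs a quarter of the memory of the float32 weights.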


How To Ensure Your Cloud Project Doesn’t Fail

To get the best out of your team requires striking a delicate balance between discipline and freedom. A bunch of “computer nerds” might not produce much value if left completely to their own devices. But they also won’t be innovative if not given freedom to explore and mess around with ideas. When building your Cloud team, look beyond technical skills. Seek individuals who are curious, adaptable, and collaborative. These traits are crucial for navigating the ever-changing landscape of Cloud technology and fostering an environment of continuous innovation. ... Culture plays a pivotal role in successful Cloud adoption. To develop the right culture for Cloud innovation, start by clearly defining and communicating your company's values and goals. You should also work to foster an environment that encourages calculated risk-taking and learning from failures as well as promotes collaboration and knowledge sharing across teams. Finally, make sure to incentivise your culture by recognising and rewarding innovation, not just successful outcomes. ... Having a well-defined culture is just the first step. To truly harness the power of your talent, you need to embed your definition of talent into every aspect of your company's processes.


2025 Tech Predictions – A Year of Realisation, Regulations and Resilience

A number of businesses are expected to move workloads from the public cloud back to on-premises data centres to manage costs and improve efficiencies. This is the essence of data freedom – the ability to move and store data wherever you need it, with no vendor lock-in. Organisations that previously shifted to the public cloud now realise that a hybrid approach is more advantageous for achieving cloud economics. While the public cloud has its benefits, local infrastructure can offer superior control and performance in certain instances, such as for resource-intensive applications that need to remain closer to the edge. ... As these threats become more commonplace, businesses are expected to adopt more proactive cybersecurity strategies and advanced identity validation methods, such as voice authentication. The uptake of AI-powered solutions to prevent and prepare for cyberattacks is also expected to increase. ... Unsurprisingly, the continuous proliferation of data into 2025 will see the introduction of new AI-focused roles. Chief AI Officers (CAIOs) are responsible for overseeing the ethical, responsible and effective use of AI across organisations and bridging the gap between technical teams and key stakeholders.


In an Age of AI, Cloud Security Skills Remain in Demand

While identifying and recruiting the right tech and security talent is crucial, cybersecurity experts note that organizations must make a conscientious choice to invest in cloud security, especially as more data is uploaded and stored within SaaS apps and third-party, infrastructure-as-a-service (IaaS) providers such as Amazon Web Services and Microsoft Azure. “To close the cloud security skills gap, organizations should prioritize cloud-specific security training and certifications for their IT staff,” Stephen Kowski, field CTO at security firm SlashNext, told Dice. “Implementing cloud-native security tools that provide comprehensive visibility and protection across multi-cloud environments can help mitigate risks. Engaging managed security service providers with cloud expertise can also supplement in-house capabilities and provide valuable guidance.” Jason Soroko, a senior Fellow at Sectigo, expressed similar sentiments when it comes to organizations assisting in building out their cloud security capabilities and developing the talent needed to fulfill this mission. “To close the cloud security skills gap, organizations should offer targeted training programs, support certification efforts and consider hiring experts to mentor existing teams,” Soroko told Dice. 



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - November 16, 2024

New framework aims to keep AI safe in US critical infrastructure

According to a release issued by DHS, “this first-of-its-kind resource was developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators — as well as the civil society and public sector entities that protect and advocate for consumers.” ... Naveen Chhabra, principal analyst with Forrester, said, “while average enterprises may not directly benefit from it, this is going to be an important framework for those that are investing in AI models.” ... Asked why he thinks DHS felt the need to create the framework, Chhabra said that developments in the AI industry are “unique, in the sense that the industry is going back to the government and asking for intervention in ensuring that we, collectively, develop safe and secure AI.” ... David Brauchler, technical director at cybersecurity vendor NCC Group, sees the guidelines as a beginning, pointing out that frameworks like this are just a starting point for organizations, providing them with big-picture guidelines, not roadmaps. He described the DHS initiative in an email as “representing another step in the ongoing evolution of AI governance and security that we’ve seen develop over the past two years. It doesn’t revolutionize the discussion, but it aligns many of the concerns associated with AI/ML systems with their relevant stakeholders.”


Building an Augmented-Connected Workforce

An augmented workforce can work faster and more efficiently thanks to seamless access to real-time diagnostics and analytics, as well as live remote assistance, observes Peter Zornio, CTO at Emerson, an automation technology vendor serving critical industries. "An augmented-connected workforce institutionalizes best practices across the enterprise and sustains the value it delivers to operational and business performance regardless of workforce size or travel restrictions," he says in an email interview. An augmented-connected workforce can also help fill some of the gaps many manufacturers currently face, Gaus says. "There are many jobs unfilled because workers aren't attracted to manufacturing, or lack the technological skills needed to fill them," he explains. ... For enterprises that have already invested in advanced digital technologies, the path leading to an augmented-connected workforce is already underway. The next step is ensuring a holistic approach when looking at tangible ways to achieve such a workforce. "Look at the tools your organization is already using -- AI, AR, VR, and so on -- and think about how you can scale them or connect them with your human talent," Gaus says. Yet advanced technologies alone aren't enough to guarantee long-term success.


DORA and why resilience (once again) matters to the board

DORA, though, might be overlooked because of its finance-specific focus. The act has not attracted the attention of NIS2, which sets out cybersecurity standards for 15 critical sectors in the EU economy. And NIS2 came into force in October; CIOs and hard-pressed compliance teams could be forgiven for not focusing on another piece of legislation that is due in the New Year. But ignoring DORA altogether would be short-sighted. Firstly, as Rodrigo Marcos, chair of the EU Council at cybersecurity body CREST points out, DORA is a law, not a framework or best practice guidelines. Failing to comply could lead to penalties. But DORA also covers third-party risks, which includes digital supply chains. The legislation extends to any third party supplying a financial services firm, if the service they supply is critical. This will include IT and communications suppliers, including cloud and software vendors. ... And CIOs are also putting more emphasis on resilience and recovery. In some ways, we have come full circle. Disaster recovery and business continuity were once mainstays of IT operations planning but moved down the list with the move to the cloud. Cyber attacks, and especially ransomware, have pushed both resilience and recovery right back up the agenda.


Data Is Not the New Oil: It’s More Like Uranium

Comparing data to uranium is an apt analogy. Uranium is radioactive, and it is imperative to handle it carefully to avoid radiation exposure, the effects of which are linked to serious health and safety concerns. Issues with the deployment of uranium, such as in reactors, can lead to radioactive fallout that is expensive to contain and has long-term health consequences for impacted individuals. The possibility of uranium being stolen poses significant risks and global repercussions. Data exhibits similar characteristics. It is critical for it to be stored safely, and those who experience data theft are forced to deal with long-term consequences – identity theft and financial concerns, for example. An organization experiencing a cyberattack must deal with regulatory oversight and fines. In some cases, losing sensitive data can trigger significant global consequences. ... Maintaining a data chain of custody is paramount. Some companies allow all employees access to all records, which increases the surface area of a cyberattack, and compromised employees could lead to a data breach. Even a single compromised employee computer can lead to a more extensive hack. Consider the case of the nonprofit healthcare network Ascension, which operates 140 hospitals and 40 senior care facilities.


Palo Alto Reports Firewalls Exploited Using an Unknown Flaw

Palo Alto said the flaw is being remotely exploited, has a "critical" severity rating of 9.3 out of 10 on the CVSS scale and that mitigating the vulnerability should be treated with the "highest" urgency. One challenge for users: no patch is yet available to fix the vulnerability. Also, no CVE code has been allocated for tracking it. "As we investigate the threat activity, we are preparing to release fixes and threat prevention signatures as early as possible," Palo Alto said. "At this time, securing access to the management interface is the best recommended action." The company said it doesn't believe its Prisma Access or Cloud NGFW are at risk from these attacks. Cybersecurity researchers confirm that real-world details surrounding the attacks and flaws remain scant. "Rapid7 threat intelligence teams have also been monitoring rumors of a possible zero-day vulnerability, but until now, those rumors have been unsubstantiated," the cybersecurity firm said in a Friday blog post. Palo Alto first warned customers on Nov. 8 that it was investigating reports of a zero-day vulnerability in the management interface for some types of firewalls and urged them to lock down the interfaces. 


Award-winning palm biometrics study promises low-cost authentication

“By harnessing high-resolution mmWave signals to extract detailed palm characteristics,” he continued, “mmPalm presents a ubiquitous, convenient and cost-efficient option to meet the growing needs for secure access in a smart, interconnected world.” The mmPalm method employs mmWave technology, which is widely used in 5G networks, to capture a person’s palm characteristics by sending and analyzing reflected signals and thereby creating a unique palm print for each user. Beyond this, mmPalm also addresses difficulties that can arise in authentication technology, such as varying distance and hand orientation. The system uses a type of AI called the Conditional Generative Adversarial Network (cGAN) to learn different palm orientations and distances, and generates virtual profiles to fill in gaps. In addition, the system adapts to different environments using a transfer learning framework so that mmPalm is suited to various settings. The system also builds virtual antennas to increase the spatial resolution of a commercial mmWave device. Tested with 30 participants over six months, mmPalm displayed a 99 percent accuracy rate and was resistant to impersonation, spoofing and other potential breaches.


Scaling From Simple to Complex Cache: Challenges and Solutions

To scale a cache effectively, you need to distribute data across multiple nodes through techniques like sharding or partitioning. This improves storage efficiency and ensures that each node only stores a portion of the data. ... A simple cache can often handle node failures through manual intervention or basic failover mechanisms. A larger, more complex cache requires robust fault-tolerance mechanisms. This includes data replication across multiple nodes, so if one node fails, others can take over seamlessly. This also includes more catastrophic failures, which may lead to significant downtime as the data is reloaded into memory from the persistent store, a process known as warming up the cache. ... As the cache gets larger, pure caching solutions struggle to provide linear performance in terms of latency while also allowing for the control of infrastructure costs. Many caching products were written to be fast at small scale. Pushing them beyond what they were designed for exposes inefficiencies in underlying internal processes. Potential latency issues may arise as more and more data are cached. As a consequence, cache lookup times can increase as the cache devotes more resources to managing the increased scale rather than serving traffic.
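
As an illustration of the sharding idea, here is a minimal consistent-hashing sketch in Python; the node names are hypothetical, and production caches layer replication and rebalancing on top of this:

    import bisect
    import hashlib

    class ConsistentHashRing:
        """Minimal consistent-hash ring: keys spread across nodes, and
        adding or removing a node only remaps a small share of keys."""

        def __init__(self, nodes, vnodes=100):
            self.ring = []                    # sorted (hash, node) pairs
            for node in nodes:
                for i in range(vnodes):       # virtual nodes smooth the spread
                    self.ring.append((self._hash(f"{node}#{i}"), node))
            self.ring.sort()

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def node_for(self, key):
            h = self._hash(key)
            idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
            return self.ring[idx][1]

    ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
    print(ring.node_for("user:42"), ring.node_for("session:abc"))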


Understanding the Modern Web and the Privacy Riddle

The main question is users’ willingness to surrender their data and not question the usage of this data. This could be attributed to the effect of the virtual panopticon, where users believe they are cooperating with agencies (government or private) that claim to respect their privacy in exchange for services. The Universal ID project (the Aadhaar project) in India, for instance, began as a means to provide identity to the poor in order to deliver social services, but has gradually expanded its scope, leading to significant function creep. Originally intended for de-duplication and preventing ‘leakages,’ it later became essential for enabling private businesses, fostering a cashless economy, and tracking digital footprints. ... In the modern web, users occupy multiple roles—as service providers, users, and visitors—while adopting multiple personas. This shift requires greater information disclosure, as users benefit from the web’s capabilities and treat their own data as currency. The unraveling of privacy has become the new norm, where withholding information is no longer an option due to the stigmatization of secrecy. Over the past few years, there has been a significant shift in how consumers and websites view privacy. Users have developed a heightened sensitivity to the use of their personal information and now recognize their basic right to internet privacy.


Databases Are a Top Target for Cybercriminals: How to Combat Them

Most ransomware strains, including Mailto, Sodinokibi (REvil), and Ragnar Locker, can encrypt and destroy pages within a database. This means the slow, unnoticed encryption of everything, from sensitive customer records to critical network resources, including Active Directory, DNS, and Exchange, and lifesaving patient health information. Because databases can continue to run even with corrupted pages, it can take longer to realize that they have been attacked. Most often, the wreckage of the attack is found only when the database is taken down for routine maintenance, and by that time, thousands of records could be gone. Databases are an attractive target for cybercriminals because they offer a wealth of information that can be used or sold on the dark web, potentially leading to further breaches and attacks. Industries such as healthcare, finance, logistics, education, and transportation are particularly vulnerable. The information contained in these databases is highly valuable, as it can be exploited for spamming, phishing, financial fraud, and tax fraud. Additionally, cybercriminals can sell this data for significant sums of money on dark web auctions or marketplaces.


The Impact of Cloud Transformation on IT Infrastructure

With digital transformation accelerating across industries, the IT ecosystem comprises traditional and cloud-native applications. This mixed environment demands a flexible, multi-cloud strategy to accommodate diverse application requirements and operational models. The ability to move workloads between public and private clouds has become essential, allowing companies to dynamically balance performance and cost considerations. We are committed to delivering cloud solutions supporting seamless workload migration and interoperability, empowering businesses to leverage the best of public and private clouds. ... With today’s service offerings and various tools, migrating between on-premises and cloud environments has become straightforward, enabling continuous optimization rather than one-time changes. Cloud-native applications, particularly containerization and microservices, are inherently optimized for public and private cloud setups, allowing for dynamic scaling and efficient resource use. To fully optimize, companies should adopt cloud-native principles, including automation, continuous integration, and orchestration, which streamline performance and resource efficiency. Robust tools like identity and access management (IAM), encryption, and automated security updates address security and reliability, ensuring compliance and data protection.



Quote for the day:

"The elevator to success is out of order. You’ll have to use the stairs…. One step at a time.” -- Rande Wilson

Daily Tech Digest - November 15, 2024

Beyond the breach: How cloud ransomware is redefining cyber threats in 2024

Unlike conventional ransomware that targets individual computers or on-premises servers, attackers are now setting their sights on cloud infrastructures that host vast amounts of data and critical services. This evolution represents a new frontier in cyber threats, requiring Indian cybersecurity practitioners to rethink and relearn defence strategies. Traditional security measures and last year’s playbooks are no longer sufficient. Attackers are exploiting misconfigured or poorly secured cloud storage platforms such as Amazon Web Services (AWS) Simple Storage Service (S3) and Microsoft Azure Blob Storage. By identifying cloud storage buckets with overly permissive access controls, cybercriminals gain unauthorised entry, copy data to their own servers, encrypt or delete the original files, and then demand a ransom for their return. ... Collaboration and adaptability are essential. By understanding the unique challenges posed by cloud security, Indian organisations can implement comprehensive strategies that not only protect against current threats but also anticipate future ones. Proactive measures—such as strengthening access controls, adopting advanced threat detection technologies, training employees, and staying informed—are crucial steps in defending against these evolving attacks.
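
As an illustration of the kind of misconfiguration being exploited, the Python sketch below uses boto3 to flag S3 buckets whose ACLs grant access to everyone; this is a starting point only, since a real review would also inspect bucket policies and Block Public Access settings:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    PUBLIC_GROUPS = (
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    )

    # Flag buckets whose ACLs grant access to all users.
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            acl = s3.get_bucket_acl(Bucket=name)
        except ClientError as err:
            print(f"{name}: could not read ACL ({err.response['Error']['Code']})")
            continue
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
                print(f"{name}: PUBLIC grant -> {grant['Permission']}")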


Harnessing AI’s Potential to Transform Payment Processing

There are many use cases that show how AI increases the speed and convenience of payment processing. For instance, Apple Pay now offers biometric authentication, which uses AI facial recognition and fingerprint scanning to authenticate users. This enables mobile payment customers to use quick and secure authentication without remembering passwords or PINs. Similarly, Apple Pay’s competitor, PayPal, uses AI for real-time fraud detection, employing ML algorithms to monitor transactions for signs of fraud and ensure that customers’ financial information remains secure. ... One issue is AI systems rely on massive amounts of data, including sensitive data, which can lead to data breaches, identity theft, and compliance issues. In addition, AI algorithms trained on biased data can perpetuate those biases. Making matters worse, many AI systems lack transparency, so the bias may grow and lead to unequal access to financial services. Another issue is the potential dependence on outside vendors, which is common with many AI technologies. ... To reduce the current risks associated with AI and safely unleash its full potential to improve payment processing, it is imperative for organizations to take a multi-layered approach that includes technical safeguards, organizational policies, and regulatory compliance. 
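
As a rough sketch of the machine learning behind real-time fraud monitoring, an isolation forest can score transactions against a learned profile of normal behavior; the features and data below are toy stand-ins, not PayPal's actual model:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Toy features: [amount, seconds since last transaction].
    rng = np.random.default_rng(0)
    normal = np.column_stack([rng.normal(60, 20, 500),
                              rng.normal(3600, 600, 500)])
    fraud = np.array([[4200, 5], [3900, 8], [5100, 3]])  # rapid, large spends
    X = np.vstack([normal, fraud])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
    flags = model.predict(X)              # -1 marks suspected anomalies
    print("flagged rows:", np.where(flags == -1)[0])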


Do you need an AI ethicist?

The goal of advising on ethics is not to create a service desk model, where colleagues or clients always have to come back to the ethicist for additional guidance. Ethicists generally aim for their stakeholders to achieve some level of independence. “We really want to make our partners self-sufficient. We want to teach them to do this work on their own,” Sample said. Ethicists can promote ethics as a core company value, no different from teamwork, agility, or innovation. Key to this transformation is an understanding of the organization’s goal in implementing AI. “If we believe that artificial intelligence is going to transform business models…then it becomes incumbent on an organization to make sure that the senior executives and the board never become disconnected from what AI is doing for or to their organization, workforce, or customers,” Menachemson said. This alignment may be especially necessary in an environment where companies are diving head-first into AI without any clear strategic direction, simply because the technology is in vogue. A dedicated ethicist or team could address one of the most foundational issues surrounding AI, notes Gartner’s Willemsen. One of the most frequently asked questions at a board level, regardless of the project at hand, is whether the company can use AI for it, he said. 


Why We Need Inclusive Data Governance in the Age of AI

Inclusive data governance processes involve multiple stakeholders, giving equal space in this decision making to diverse groups from civil society, as well as space for direct representation of affected communities as active stakeholders. This links to, but is an idea broader than, the concept of multi-stakeholder governance for technology, which first came to prominence at the international level, in institutions such as the Internet Corporation for Assigned Names and Numbers and the Internet Governance Forum. ... Involving the public and civil society in decisions about data is not cost-free. Taking the steps that are needed to surmount the practical challenges, and skepticism about the utility of public involvement in a technical and technocratic field, frequently requires arguments that go beyond it being the right thing to do. ... The risks for people, communities and society, but also for organizations operating within the data and AI marketplace and supply chain, can be reduced through greater inclusion earlier in the design process. But organizational self-interest will not motivate the scope or depth that is required. Reducing the reality and perception of “participation-washing” means requirements for consultation in the design of data and AI systems need to be robust and enforceable. 


Strategies to navigate the pitfalls of cloud costs

If cloud customers spend too much money, it’s usually because they created cost-ineffective deployments. It’s common knowledge that many enterprises “lifted and shifted” their way to the clouds with little thought about how inefficient those systems would be in the new infrastructure. ... Purposely or not, public cloud providers created intricate pricing structures that are nearly incomprehensible to anyone who does not spend each day creating cloud pricing structures to cover every possible use. As a result, enterprises often face unexpected expenses. Many of my clients frequently complain that they have no idea how to manage their cloud bills because they don’t know what they’re paying for. ... Cloud providers often encourage enterprises to overprovision resources “just in case.” Enterprises still pay for that unused capacity, so the misalignment dramatically elevates costs without adding business value. When I ask my clients why they provision so much more storage or computing resources beyond what their workload requires, the most common answer is, “My cloud provider told me to.” ... One of the best features of public cloud computing is autoscaling, which ensures you never run out of resources or suffer bad performance due to insufficient resource provisioning. However, autoscaling often leads to colossal cloud bills because it is frequently triggered without good governance or purpose.


Your IT Team Isn't Ready For Change Management If They Can't Answer These 3 Questions

Testing software before it reaches users is key, but never should you be exposed to failures with this level of real-world impact. Whether the fault lies with third-party systems or with the companies themselves, it is the company’s brand that will be left in tatters by the end-customer experience. Enter Change Management and, if done right, the possibility of preventing these kinds of enormous IT failures. ... The ever-evolving nature of technology, including cloud scaling, infrastructure as code, and frequent updates such as ‘Patch Tuesday’, means that organisations must constantly adapt to change. However, this constant change introduces challenges such as “drift”—a term that refers to the unplanned deviations from standard configurations or expected states within an IT environment. Think of it like a pesky monkey in the machine. Drift can occur subtly and often goes unnoticed until it causes significant disruptions. It also increases uncertainty and doubt in the organisation, making Change Management and Release Management harder and making it more difficult to plan and execute changes safely. ... To be effective, Change Management needs to be able to detect and understand drift in the environment to have a full understanding of Current State, Risk Assessment and Expected Outcomes.
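
Conceptually, drift detection reduces to diffing the declared configuration against the running state, as in this minimal Python sketch; the configuration fields are illustrative, and real tooling pulls actual state from live systems:

    def detect_drift(expected, actual):
        """Report every field where the running state deviates from
        the declared configuration."""
        drift = {}
        for key in expected.keys() | actual.keys():
            if expected.get(key) != actual.get(key):
                drift[key] = {"expected": expected.get(key),
                              "actual": actual.get(key)}
        return drift

    declared = {"instance_type": "m5.large", "min_nodes": 3, "tls": "1.3"}
    running = {"instance_type": "m5.xlarge", "min_nodes": 3, "tls": "1.2",
               "debug_port": 9229}  # opened by hand, never declared

    for field, states in detect_drift(declared, running).items():
        print(f"drift in {field}: expected={states['expected']} "
              f"actual={states['actual']}")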


RIP Open Core — Long Live Open Source

Open-core was originally popular because it allowed companies to build a community around a free product version while charging for a more full, enterprise-grade version. This setup thrived in the 2010s, helping companies like MongoDB and Redis gain traction. But times have changed, and today, instead of enhancing a company’s standing, open-core models often create more problems than they solve. ... While open-core and source-available models had their moment, companies are beginning to realize the importance of true open source values and are finding their way back. This return to open source is a sign of growth, with businesses realigning with the collaborative spirit at the core (wink) of the OSS community. More companies are adopting models that genuinely prioritize community engagement and transparency rather than using them as marketing or growth tactics. ... As the open-core model fades, we’re seeing a more sustainable approach take shape: the Open-Foundation model. This model allows the open-source offering to be the backbone of a commercial offering without compromising the integrity of the OSS project. Rather, it reinforces it as a valuable, standalone product that supports the commercial offering instead of competing against it.


Why SaaS Backup Matters: Protecting Data Beyond Vendor Guarantees

Most IT departments have long recognized the importance of backup and recovery for applications and data that they host themselves. When no one else is managing your workloads and backing them up, having a recovery plan to restore them if necessary is essential for minimizing the risk that a failure could disrupt business operations. But when it comes to SaaS, IT operations teams sometimes think in different terms. That's because SaaS applications are hosted and managed by external vendors, not the IT departments of the businesses that use SaaS apps. In many cases, SaaS vendors provide uptime or availability guarantees. They don't typically offer details about exactly how they back up applications and data or how they'll recover data in the event of a failure, but a backup guarantee is typically implicit in SaaS products. ... Historically, SaaS apps haven't featured prominently, if at all, in backup and recovery strategies. But the growing reliance on SaaS apps — combined with the many risks that can befall SaaS application data even if the SaaS vendor provides its own backup or availability guarantees — makes it critical to integrate SaaS apps into backup and recovery plans. The risks of not backing up SaaS have simply become too great.


Biometrics in the Cyber World

Biometrics strengthens security in many ways, including stronger authentication, greater user convenience, and a reduced risk of identity theft. Because each user’s traits are unique, biometrics adds a layer of security to authentication. Traditional authentication methods, such as passwords, often rely on weak combinations that can easily be breached, and biometrics can prevent that. People constantly forget their passwords and end up resetting them; since biometrics is tied entirely to one’s identity, a forgotten password is no longer an inconvenience. Hackers commonly attempt to get into an account by guessing the password, but biometrics removes that avenue: a hacker cannot guess a password when the uniqueness of one’s identity is what is required for authentication. This is one of the advantages of biometric systems, and it should reduce the risk of identity theft. Some challenges for biometrics include privacy concerns, false positives and negatives, and bias. Because it is personal information, biometric data can lead to privacy violations. Storage of such sensitive personal data must comply with privacy regulations such as GDPR and CCPA.


Data Architectures in the AI Era: Key Strategies and Insights

Results in data architecture initiatives can be achieved much more quickly if you start with the minimum needed for your data storage and build from there. Begin by considering all use cases and identifying the one component that must be developed so a data product can be delivered. Expansion can happen over time with use and feedback, which will actually create a more tailored and desirable product. ... Educate your key personnel on the importance of being able and ready to make the shift from previously familiar legacy data systems to modern architectures like data lakehouses or hybrid cloud platforms. Migration to a unified, hybrid, or cloud-based data management system may seem challenging initially, but it is essential for enabling comprehensive data lifecycle management and AI-readiness. By investing in continuous education and training, organizations can enhance data literacy, simplify processes, and improve long-term data governance, positioning themselves for scalable and secure analytics practices. ... By preparing for the typical challenges of AI, problems can be predicted and anticipated, which helps reduce downtime and frustration in the modernization of data architecture.



Quote for the day:

“The final test of a leader is that he leaves behind him in other men the conviction and the will to carry on.” – Walter Lippmann

Daily Tech Digest - November 14, 2024

Where IT Consultancies Expect to Focus in 2025

“Much of what’s driving conversations around AI today is not just the technology itself, but the need for businesses to rethink how they use data to unlock new opportunities,” says Chaplin. “AI is part of this equation, but data remains the foundation that everything else builds upon.” West Monroe also sees a shift toward platform-enabled environments where software, data, and platforms converge. “Rather than creating everything from scratch, companies are focusing on selecting, configuring, and integrating the right platforms to drive value. The key challenge now is helping clients leverage the platforms they already have and making sure they can get the most out of them,” says Chaplin. “As a result, IT teams need to develop cross-functional skills that blend software development, platform integration and data management. This convergence of skills is where we see impact -- helping clients navigate the complexities of platform integration and optimization in a fast-evolving landscape.” ... “This isn’t just about implementing new technologies, it’s about preparing the workforce and the organization to operate in a world where AI plays a significant role. ...”


How Is AI Shaping the Future of the Data Pipeline?

AI’s role in the data pipeline begins with automation, especially in handling and processing raw data – a traditionally labor-intensive task. AI can automate workflows and allow data pipelines to adapt to new data formats with minimal human intervention. With this in mind, Harrisburg University is actively exploring AI-driven tools for data integration that leverage LLMs and machine learning models to enhance and optimize ETL processes, including web scraping, data cleaning, augmentation, code generation, mapping, and error handling. These adaptive pipelines, which automatically adjust to new data structures, allow companies to manage large and evolving datasets without the need for extensive manual coding. ... Beyond immediate operational improvements, AI is shaping the future of scalable and sustainable data pipelines. As industries collect data at an accelerating rate, traditional pipelines often struggle to keep pace. AI’s ability to scale data handling across various formats and volumes makes it ideal for supporting industries with massive data needs, such as retail, logistics, and telecommunications. In logistics, for example, AI-driven pipelines streamline inventory management and optimize route planning based on real-time traffic data. 
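
One simple flavor of that adaptivity is mapping incoming columns onto a canonical schema by name similarity, sketched below in Python; the canonical names and cutoff are illustrative, and production pipelines add type checks and human review:

    import difflib

    CANONICAL = ["customer_id", "order_total", "order_date"]

    def map_columns(incoming_columns, cutoff=0.6):
        """Match renamed or reordered source columns to the pipeline's
        canonical schema instead of failing the load outright."""
        lowered = {c.lower(): c for c in incoming_columns}
        mapping = {}
        for target in CANONICAL:
            match = difflib.get_close_matches(target, list(lowered),
                                              n=1, cutoff=cutoff)
            mapping[target] = lowered[match[0]] if match else None
        return mapping

    # A feed that renamed its headers overnight.
    print(map_columns(["CustomerID", "order_total_usd", "OrderDate", "channel"]))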


Innovating with Data Mesh and Data Governance

Companies choose a data mesh to overcome the limitations of “centralized and monolithic” data platforms, as noted by Zhamak Dehghani, the director of emerging technologies at Thoughtworks. Technologies like data lakes and warehouses try to consolidate all data in one place, but enterprises can find that the data gets stuck there. A company might have a single centralized data repository, served up to the rest of the company by one team – typically IT – and this bottleneck slows down data access. For example, having already taken days to get HR privacy approval, the finance department’s data access requests might then sit in the inbox of one or two people in IT for additional days. A data mesh instead puts data control in the hands of each domain that serves that data. Subject matter experts (SMEs) in the domain control how the data is organized, managed, and delivered. ... Data mesh with federated Data Governance balances expertise, flexibility, and speed with data product interoperability among different domains. With a data mesh, the people with the most knowledge about their subject matter take charge of their data. Going forward, organizations will continue to face challenges in providing good, federated Data Governance for data accessed through a data mesh.
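
One way to picture domain ownership is as a machine-readable data product descriptor that the owning domain publishes itself. The sketch below is an illustration, not a standard; every field name is an assumption. The point is that the domain, not a central team, declares how its data is organized, governed, and served:

    from dataclasses import dataclass, field

    @dataclass
    class DataProduct:
        name: str                   # e.g. "finance.payroll_summary"
        owner_domain: str           # the SME team accountable for the data
        output_port: str            # where consumers read the data from
        schema_version: str         # the contract consumers rely on
        governance_tags: list[str] = field(default_factory=list)

    # The finance domain registers its own product, with governance
    # metadata (e.g. privacy approvals) attached at the source.
    payroll = DataProduct(
        name="finance.payroll_summary",
        owner_domain="finance",
        output_port="s3://finance/payroll-summary/v3/",
        schema_version="3.1",
        governance_tags=["pii", "hr-privacy-approved"],
    )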


The Agile Manifesto was ahead of its time

A fundamental idea of the agile methodology is to alleviate this and allow for flexibility and changing requirements. The software development process should ebb and flow as features are developed and requirements change. The software should adapt quickly to these changes. That is the heart and soul of the whole Agile Manifesto. However, when the Agile Manifesto was conceived, the state of software development and software delivery technology was not flexible enough to fulfill what the manifesto was espousing. But this has changed with the advent of the SaaS (software as a service) model. It’s all well and good to want to maximize flexibility, but for many years, software had to be delivered all at once. Multiple features had to be coordinated to be ready for a single release date. Time had to be allocated for bug fixing. The limits of the technology forced software development teams to be disciplined, rigid, and inflexible. Delivery dates had to be met, after all. And once the software was delivered, changing it meant delivering all over again. Updates were often a cumbersome and arduous process. A Windows program of any complexity could be difficult to install and configure. Delivering or upgrading software at a site with 200 computers running Windows could be a major challenge.


Improving the Developer Experience by Deploying CI/CD in Databases

Characteristically less mature than CI/CD for application code, CI/CD for databases enables developers to manage schema updates such as changes to table structures and relationships. This management ability means developers can execute software updates to applications quickly and continuously without disrupting database users. It also helps improve quality and governance, creating a pipeline everyone follows. The CI stage typically involves developers working on code simultaneously, helping to fix bugs and address integration issues in the initial testing process. With the help of automation, businesses can move faster, with fewer dependencies and errors and greater accuracy — especially when backed up by automated testing and validation of database changes. Human intervention is not needed, resulting in fewer hours spent on change management. ... Deploying CI/CD for databases empowers developers to focus on what they do best: Building better applications. Businesses today should decide when, not if, they plan to implement these practices. For development leaders looking to start deploying CI/CD in databases, standardization — such as how certain things are named and organized — is a solid first step and can set the stage for automation in the future. 
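
In practice, the database CI stage often reduces to a job that applies versioned migration scripts in order and records what has already run. Here is a minimal sketch, using sqlite3 for brevity; the migrations/ directory layout and file naming are assumptions:

    # Apply versioned SQL migration files in sorted order, recording each
    # one in a tracking table so repeated runs are idempotent.
    import sqlite3
    from pathlib import Path

    def apply_migrations(db_path: str, migrations_dir: str = "migrations") -> None:
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)"
        )
        applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
        for script in sorted(Path(migrations_dir).glob("*.sql")):
            if script.stem in applied:
                continue  # already applied on a previous run
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_migrations VALUES (?)", (script.stem,))
            conn.commit()
        conn.close()

A naming convention such as 001_create_users.sql, 002_add_orders.sql is exactly the kind of standardization mentioned above: it makes the ordering unambiguous and sets the stage for automation.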


To Dare or not to Dare: the MVA Dilemma

Business stakeholders must understand the benefits of technology experiments in terms they are familiar with: how the technology will better satisfy customer needs. Operations stakeholders need to be satisfied that the technology is stable and supportable, or at least that stability and supportability are part of the criteria used to evaluate it. Avoiding technology experiments entirely is usually a mistake, because it forgoes opportunities to solve business problems in better ways, leading to less effective solutions and, over time, mounting technical debt. ... These trade-offs are constrained by two simple truths: the development team doesn’t have much time to acquire and master new technologies, and it cannot put the business goals of the release at risk by adopting unproven or unsustainable technology. This often leads teams to stick with tried-and-true technologies, but that strategy carries its own risks, most notably of the hammer-nail kind, in which old technologies ill-suited to novel problems are used anyway – as when relational databases are used to store graph-like data structures.
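
The hammer-nail risk is easy to see with the relational-versus-graph example. The sketch below (an illustration, not drawn from the article) stores an org chart in a relational table; even the simple question “who is in this manager’s reporting chain?” already requires a recursive query that a graph store would answer natively:

    # Graph-shaped data forced into a relational table: traversal needs a
    # recursive common table expression rather than a natural graph query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE reports_to (employee TEXT, manager TEXT)")
    conn.executemany(
        "INSERT INTO reports_to VALUES (?, ?)",
        [("bo", "ada"), ("cy", "bo"), ("di", "cy")],
    )
    rows = conn.execute(
        """
        WITH RECURSIVE chain(employee) AS (
            SELECT employee FROM reports_to WHERE manager = 'ada'
            UNION ALL
            SELECT r.employee FROM reports_to r JOIN chain c ON r.manager = c.employee
        )
        SELECT employee FROM chain
        """
    ).fetchall()
    print(rows)  # [('bo',), ('cy',), ('di',)]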


2025 API Trend Reports: Avoid the Antipattern

Modern APIs aren’t all durable, full-featured products, and don’t need to be. If you’re taking multiple cross-functional agile sprints to design an API you’ll use for less than a year, you’re wasting resources building a system that will probably be overspecified and bloated. The alternative is to use tools and processes centered around an API developer’s unit of work, which is a single endpoint. No matter the scope or lifespan of an API, it will consist of endpoints, and each of those has to be written by a developer, one at a time. It’s another way that turning back to the fundamentals can help you adapt to new trends. ... Technology will keep evolving, and the way we employ AI might look quite different in a few years. Serverless architecture is the hot trend now, but something else will eventually overtake it. No doubt, cybercriminals will keep surprising us with new attacks. Trends evolve, but underlying fundamentals — like efficiency, the need for collaboration, the value of consistency and the need to adapt — will always be what drives business decisions. For the API industry, the key to keeping up with trends without sacrificing fundamentals is to take a developer-centric approach. Developers will always create the core value of your APIs. 
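
As an illustration of the endpoint-as-unit-of-work idea, here is a minimal sketch using FastAPI (the framework choice and route are assumptions for the example): each route is a small, independently written and tested unit, whether the surrounding API lives for a decade or a quarter.

    from fastapi import FastAPI

    app = FastAPI()

    # One endpoint = one developer's unit of work. It can be designed,
    # reviewed, and shipped on its own, without a multi-sprint API program.
    @app.get("/orders/{order_id}")
    def read_order(order_id: int) -> dict:
        return {"order_id": order_id, "status": "pending"}  # stub response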


The targeted approach to cloud & data - CIOs' need for ROI gains

AI and DaaS are part of the pool of technologies Pacetti draws on, and the company also uses AI provided by Microsoft, both with ChatGPT and Copilot. AI has also been integrated into the e-commerce site to support product research and recommendations. But there’s an even more essential area for Pacetti. “With the end of third-party cookies, AI is now essential for exploiting the little data we can capture from internet users who accept tracking,” he says. “We use Google’s GA4 to compensate for missing analytics data, for example by exploiting data from technical cookies.” ... CIOs discuss sales targets with CEOs and the board, cementing the IT-business bond. An even more innovative step is not only making IT a driver of revenue, but also measuring IT with business indicators. This is a form of advanced convergence achieved by following specific methodologies. Sondrio People’s Bank (BPS), for example, adopted business relationship management, which translates requests from operational functions to IT and, vice versa, brings IT into operational functions. BPS also adopts proactive thinking, a risk-based framework for strategic alignment and compliance with business objectives. 


Hidden Threats Lurk in Outdated Java

How important are security updates? After all, Java is now nearly 30 years old; haven’t we eliminated all the vulnerabilities by now? Sadly not, and realistically that will never happen. OpenJDK contains 7.5 million lines of code and relies on many external libraries, all of which can be subject to undiscovered vulnerabilities. ... Since Oracle changed its distributions and licensing, there have been 22 updates. Of these, six patch set updates (PSUs) required a modification and a new release to address a regression that had been introduced; creating the new update took from just under two weeks to over five weeks. At no time have any of the critical patch updates (CPUs) been affected like this. Access to a CPU is essential to maintain the maximum level of security for your applications. Since all free binary distributions of OpenJDK provide only the PSU version, some users may consider a couple of weeks’ delay before being able to deploy an acceptable risk. ... When an update to the JDK is released, all vulnerabilities addressed are disclosed in the release notes. Bad actors then have the information they need to look for ways to exploit unpatched applications.


How to defend Microsoft networks from adversary-in-the-middle attacks

Depending on the impact of the attack, start the cleanup process. Begin by forcing a password change on the user account, ensuring that you have revoked all tokens to cut off the attacker’s stolen session credentials. If the consequences of the attack were severe, consider disabling the user’s primary account and setting up a new temporary account while you investigate the extent of the intrusion. You may even consider quarantining the user’s devices and taking forensic-level backups of workstations if you are unsure of the original source of the intrusion, so you can investigate thoroughly. Next, review all app registrations, changes to service principals, enterprise apps, and anything else the user may have changed or affected since the intrusion was noticed. You’ll also want to do a deep investigation into the mailbox’s access and permissions. Mandiant has a PowerShell-based script that can assist you in investigating the impact of the intrusion. “This repository contains a PowerShell module for detecting artifacts that may be indicators of UNC2452 and other threat actor activity,” Mandiant notes. “Some indicators are ‘high-fidelity’ indicators of compromise, while other artifacts are so-called ‘dual-use’ artifacts.”
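
For the token-revocation step, Microsoft Graph exposes a revokeSignInSessions action. Here is a minimal sketch in Python; the access token and user ID are placeholders, and in practice you would acquire the token through an authenticated admin app with the appropriate directory permissions:

    import requests

    ACCESS_TOKEN = "<admin-access-token>"       # placeholder
    USER_ID = "compromised.user@example.com"    # placeholder

    # Invalidates refresh tokens and session cookies issued before now,
    # cutting off any sessions the attacker hijacked.
    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/users/{USER_ID}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()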



Quote for the day:

"To think creatively, we must be able to look afresh to at what we normally take for granted." -- George Kneller