Daily Tech Digest - December 05, 2023

Post-Quantum Cryptography: The lynchpin of future cybersecurity

Since we are still at least a decade away from an ideal quantum computer, this may not seem like an imminent threat. However, this is not the case, since annealing quantum computers are already a reality. While these are not capable of running Shor’s algorithm, they can attack the factoring problem by formulating it as an optimization problem, and considerable progress has already been made. Furthermore, there is also the problem of “harvest now, decrypt later”: an attacker can steal encrypted data today, wait until quantum computers become a practical reality, and decrypt it later. This means quantum computers already pose a very real threat before they even exist. There is a distinct possibility that large amounts of data have already been compromised, and rectifying this is an immediate concern, which is why incorporating PQC into current encryption protocols is imperative. For instance, according to IBM’s “Cost of a Data Breach Report 2023,” more than 95 percent of the organisations studied globally have experienced more than one data breach.
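To make the optimization formulation concrete: factoring N can be posed as minimizing the cost (N − p·q)², which is the style of objective an annealing machine encodes over binary variables. The sketch below simply scans the search space classically to show the idea; it is illustrative only, not a real annealer workload.

```python
# Classical sketch of factoring posed as an optimization problem:
# minimize the cost (N - p*q)^2 over candidate factor pairs.
# An annealer would encode a cost like this over binary variables;
# here we scan the space directly to demonstrate the formulation.

def factor_by_optimization(n: int):
    """Return (p, q) minimizing (n - p*q)**2, i.e. an exact
    factorization when one exists with p, q > 1."""
    best = None
    best_cost = float("inf")
    for p in range(2, int(n ** 0.5) + 1):
        for q in range(p, n // p + 1):
            cost = (n - p * q) ** 2
            if cost < best_cost:
                best, best_cost = (p, q), cost
            if cost == 0:
                return best  # exact factorization found
    return best

print(factor_by_optimization(77))    # (7, 11)
print(factor_by_optimization(3127))  # (53, 59)
```

The exhaustive scan is exponential in the bit length of N, which is exactly why encoding the same cost function on quantum annealing hardware is attractive to attackers.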


Payments for net zero – How the payments industry can contribute towards decarbonisation

It is crucial to involve senior leaders in comprehending the compelling reasons for both commercial and societal urgency to decarbonise. Furthermore, gaining insight into how various stakeholders (ranging from employees, investors, and regulators to civil society) are progressively aligning with the necessity for businesses and society to undergo decarbonisation will fortify the approach. This alignment creates a potent mandate and a unique opportunity for the payment network to discern and investigate its distinct role in facilitating the transition toward net zero. ... Payment networks & fintechs should allocate sufficient resources to explore alignment between their core capabilities and sectors/systems needing to decarbonise. This may involve investing in sustainability and climate change expertise within core teams such as data, product innovation, and strategy. Additionally, conducting robust research on trends and carbon impacts in various economic sub-sectors can help overlay payment networks’ capacities to pinpoint net-zero solutions. Engaging with external stakeholders can also aid in identifying and testing potential opportunity areas.


How AI-assisted code development can make your IT job more complicated

Increased use of AI will also mean personalization becomes an important skill for developers. Today's applications "need to be more intuitive and built with the individual user in mind versus a generic experience for all," says Lobo. "Generative AI is already enabling this level of personalization, and most of the coding in the future will be developed by AI." Despite the rise of generative technology, humans will still be required at key points in the development loop to assure quality and business alignment. "Traditional developers will be relied upon to curate the training data that AI models use and will examine any discrepancies or anomalies," Lobo adds. Technology managers and professionals will need to assume more expansive roles within the business side to ensure that the increased use of AI-assisted code development serves its purpose. We can expect this focus on business requirements to lead to a growth in responsibility via roles such as "ethical AI trainer, machine language engineer, data scientist, AI strategist and consultant, and quality assurance," says Lobo. 


The more the CIO can function as a centralized source for technology resources, the better, says Ping Identity’s Cannava, who sees this transpiring in three phases, depending on the maturity of the organization. In Phase 1, the CIO is the clearinghouse for current technology projects, taking on the traditional role as in-house consultant. In Phase 2, the CIO becomes the clearinghouse for data within the organization. “In many cases, we are the keepers of the keys to datasets,” he says. “We have the ability to bring datasets together, and those insights could drive what the agenda is for the business. They could show us where we have the opportunity to improve our go-to-market. So having that access to the insights driving business intelligence initiatives has allowed us to expand our seat at the table.” In Phase 3, the CIO also becomes the clearinghouse for emerging technologies. Because, he says, to truly unlock the potential of all that data, you need artificial intelligence. And that raises some immediate questions for CIOs who want to be orchestrators. 


How DoorDash Migrated from Aurora Postgres to CockroachDB

Until the monolith was broken up, it offered a single view of the toll that demand for the application was taking on the databases. But once that monolith was broken into microservices, that visibility would disappear. “Our biggest enemy was the single primary architecture of our database,” Salvatori said. “And our North Star would be to move to a solution that offered multiple writers.” In the meantime, the DoorDash team adopted a “poor man’s solution” approach to dealing with its overmatched database architecture, Salvatori told the Roachfest audience: building a vertical federation of tables while not blocking microservices extractions. In this game of “whack-a-mole,” he said, “Different tables would be able to get their own single writer and therefore scale a little bit and allow us to keep the lights on for a little bit longer. But we needed to take steps toward limitless horizontal scalability.” Cockroach, a distributed SQL database management system, seemed like the right answer.
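As an illustration of the vertical-federation stopgap described above, write routing per table can be as simple as a lookup from each table to its own single-writer endpoint. Everything here is hypothetical: the table names and endpoint strings are invented, not DoorDash's actual topology.

```python
# Hypothetical sketch of "vertical federation": each table gets its own
# single-writer database, and a thin router sends writes to the right one.
# Table and endpoint names are invented for illustration.

TABLE_TO_WRITER = {
    "orders":     "postgres://orders-primary:5432",
    "deliveries": "postgres://deliveries-primary:5432",
    "menus":      "postgres://menus-primary:5432",
}

def route_write(table: str) -> str:
    """Pick the primary responsible for a table's writes."""
    try:
        return TABLE_TO_WRITER[table]
    except KeyError:
        raise ValueError(f"no federated writer configured for table {table!r}")

print(route_write("orders"))  # postgres://orders-primary:5432
```

Each primary still scales only vertically, which is the "whack-a-mole" limitation that motivated the move to a distributed SQL system with multiple writers.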


Taming the Virtual Threads: Embracing Concurrency With Pitfall Avoidance

When a virtual thread needs to process a long computation, it occupies its carrier thread for the whole duration, preventing other virtual threads from utilizing that carrier thread. The same monopolization occurs when a virtual thread performs blocking operations while pinned to its carrier (for example, blocking inside a synchronized block or a native call), preventing other virtual threads from making progress. Inefficient resource management within the virtual thread can also lead to excessive resource utilization, causing monopolization of the carrier thread. Monopolization can have detrimental effects on the performance and scalability of virtual thread applications: increased contention for carrier threads, reduced overall concurrency, and potential deadlocks. To mitigate these issues, developers should minimize monopolization by breaking down lengthy computations into smaller, more manageable tasks, allowing other virtual threads to interleave and utilize the carrier thread.
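The mitigation described is Java-specific, but the same cooperative-chunking idea can be sketched with Python's asyncio as an analogue: a long computation that yields between chunks lets other tasks interleave on the shared executor, where an unbroken loop would starve them.

```python
# Analogue of the article's mitigation, sketched with Python's asyncio in
# place of Java virtual threads: a long computation that yields between
# chunks lets other coroutines interleave instead of being starved.
import asyncio

async def long_computation_chunked(n: int, chunk: int = 10_000) -> int:
    total = 0
    for i in range(n):
        total += i
        if i % chunk == 0:
            await asyncio.sleep(0)  # yield so other tasks can run
    return total

async def heartbeat(ticks: list) -> None:
    for _ in range(5):
        ticks.append("tick")        # only progresses if the computation yields
        await asyncio.sleep(0)

async def main():
    ticks = []
    result, _ = await asyncio.gather(
        long_computation_chunked(100_000), heartbeat(ticks)
    )
    return result, ticks

result, ticks = asyncio.run(main())
print(result, len(ticks))  # 4999950000 5
```

Removing the `await asyncio.sleep(0)` line reproduces the monopolization problem: the heartbeat task cannot run until the entire computation finishes.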


The all-flash datacentre: Mirage or imminent reality?

The initial advantage of flash over HDDs was speed. Flash was adopted in workstations and laptops, and in enterprise servers running performance-critical and especially I/O-dependent applications. Flash’s performance edge is greatest on random reads and writes. The gap is narrower for sequential read/write operations. A well-configured HDD array with flash-based caching comes close enough to all-flash speeds in real-world environments. “It does depend what infrastructure you have and what characteristics you are looking for from your storage,” says Roy Illsley, chief analyst of IT operations at Omdia. “That includes performance on read, on writes, capacity. The most appropriate [storage] for your needs could be flash, or just as equally spinning media. All flash datacentres may be a reality where workloads require the strength of flash, but I am not expecting all-flash datacentres to become commonplace.” According to Rainer Kaise, senior manager of business development at Toshiba Electronics Europe – a hard drive manufacturer – 85% of the world’s online media is still stored on HDDs.


How cybersecurity teams should prepare for geopolitical crisis spillover

It is one thing to understand why geopolitical spillover impacts private enterprise but another to be able to assign any kind of probability of risk to them. Fortunately, research on global cyber conflict and enterprise cybersecurity provides a reasonable starting point for dealing with this uncertainty. Scholars and policy commentators are interested in linking the realities of cyber operations to situational risk profiles, particularly for non-degradation threats, for which traditional security assessment processes tend to be insufficient. Performative attacks come with perhaps the most obvious set of threat indicators. Companies that are "named and shamed" during geopolitical crisis moments tend to have one of two characteristics. First, their symbolic profile is constitutionally indivisible in the context of the current conflict. This means that a firm, through its statements, actions, or products, clearly underwrites one side in the conflict. Media organizations that consistently toe a national line such as Russia's Pravda are an example of this, but so are firms with leaders or major stakeholders belonging to ethnic, religious, or linguistic backgrounds pertinent to a crisis.


Can cloud computing be truly federated?

The core idea is to save money, but it requires accepting that the physical resources could be scattered in any system willing to be part of the federated cloud. I’m not thinking of this in silly terms, such as taking over someone’s smartwatch as a peer node; rather, there is a vast quantity of underutilized hardware out there, still running and connected to a network in an enterprise data center, that could be leveraged for this model. The idea of a federated public cloud service does exist today in varying degrees of maturity, so please don’t send me an angry email telling me your product has been doing this for years and that I’m somehow a bad person for not knowing it existed. As I said, federation is an old architectural concept many have adopted. What is new is bringing it to a widely used public cloud computing platform, which we haven’t seen yet for the most part. In this approach, a centralized system coordinates the provisioning of traditional cloud services such as storage and computing between the requesting peer and a peer that could provide that service.


How AI is revolutionizing “shift left” testing in API security

SAST and DAST are well-established web application tools that can be used for API testing. However, the complex workflows associated with APIs can result in an incomplete analysis by SAST. At the same time, DAST cannot provide an accurate assessment of the vulnerability of an API without more context on how the API is expected to function correctly, nor can it interpret what constitutes a successful business logic attack. In addition, while security and AppSec teams are at ease with SAST and DAST, developers can find them challenging to use. Consequently, we’ve seen API-specific test tooling gain ground, enabling things like continuous validation of API specifications. API security testing is increasingly being integrated into the API security offering, translating into much more efficient processes, such as automatically associating appropriate APIs with suitable test cases. A major challenge with any application security test plan is generating test cases tailored explicitly for the apps being tested before release. 
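The kind of automation described, automatically associating APIs with suitable test cases, can be sketched with a toy spec-driven generator. The spec format, endpoint, and field names below are all invented for illustration; real tooling would consume an OpenAPI document.

```python
# Toy sketch of spec-driven API test generation: from a minimal, invented
# endpoint description, derive negative test cases (missing fields) of the
# kind an API security testing tool would run automatically pre-release.

SPEC = {
    "POST /transfer": {"amount": int, "to_account": str},
}

def generate_negative_cases(spec: dict) -> list[dict]:
    cases = []
    for endpoint, fields in spec.items():
        # Build a minimal valid body using each type's zero value.
        valid = {name: t() for name, t in fields.items()}
        # One negative case per field: drop it and expect a rejection.
        for name in fields:
            body = {k: v for k, v in valid.items() if k != name}
            cases.append({"endpoint": endpoint, "body": body,
                          "expect": "reject: missing " + name})
    return cases

cases = generate_negative_cases(SPEC)
print(len(cases))  # 2
```

A real generator would also mutate field types and values to probe business-logic assumptions, which is precisely the context SAST and DAST lack.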



Quote for the day:

"The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself." -- Mark Caine

Daily Tech Digest - December 04, 2023

Proactive, not reactive: the path to ensuring operational resilience in cybersecurity

Operational resilience goes beyond ensuring business continuity by mitigating disruptions as and when they occur. Resilience needs a proactive approach to maintaining stable and reliable digital systems, regardless of the severity of threat incidents. This "bankability" (excuse the pun) of the financial system is critical to preserving public trust and confidence in the global financial system. Given the interconnectedness of financial firms with external third parties, any plan for operational resilience needs to address multiple lines of communication, automated systems of interactions and information sharing, and a growing attack surface. ... The dependence of the financial sector on the telecom and energy industries, and the increasingly global nature of the sector means that operational resilience exercises need to not just be cross-border, but cross-sector too. Today, national or even global-level threats are a reality, emphasizing the need to include government partners in the exercises. After all, protecting critical private infrastructure safeguards a nation's financial stability.


Black-Box, Gray Box, and White-Box Penetration Testing: Importance and Uses

Gray-box penetration testing can simulate advanced persistent threat (APT) scenarios in which the attacker is highly sophisticated and operates on a longer time scale (CISA, 2023). In these types of attacks, the threat actor has collected a good deal of information about the target system—similar to a gray-box testing scenario. Gray-box penetration testing allows many organizations to strike the right balance between white-box and black-box testing. ... The main disadvantage of gray-box testing is that it can be too “middle-of-the-road” when compared with black-box or white-box testing. If organizations do not strike the right balance during gray-box testing, they may miss crucial insights that would have been found with a different technique. ... Black-box, gray-box, and white-box testing are all valuable forms of penetration testing, each with its own pros, cons, and use cases. Penetration testers need to be familiar with the importance and use cases of each type of test to execute them most efficiently, using the right tools for each one.


The arrival of genAI could cover critical skills gaps, reshape IT job market

While genAI offers the promise of clear business benefits, education is key and collaboration with cybersecurity and risk experts is needed to help establish an environment where the technology can be used safely, securely, and productively, according to Emm. Hurdles to adopting AI persist. Those issues include high costs, uncertain return on investment (ROI), the need to upskill entire staffs, and potential exposure of sensitive corporate data to unfamiliar automation technology. Few organizations, however, have put appropriate safeguards in place to guard against some of genAI's most well-known flaws, such as hallucinations, exposure of corporate data, and data errors. Most are leaving themselves wide open to the acknowledged risks of using genAI, according to Kaspersky. For example, only 22% of C-level executives have discussed putting rules in place to regulate the use of genAI in their organizations — even as they eye it as a way of closing the skills gap. Cisco CIO Fletcher Previn, whose team is working to embed AI in back-end systems and products, said it's critical to have the policies, security, and legal guardrails in place to be able to "safely adopt and embrace AI capabilities other vendors are rolling out into other people’s tools."


State of Serverless Computing and Event Streaming in 2024

Traditional stream processing usually involves an architecture with many moving parts managing distributed infrastructure and using a complex stream processing engine. For instance, Apache Spark, one of the most popular processing engines, is notoriously difficult to deploy, manage, tune and debug (read more about the good, bad and ugly of using Spark). Implementing a reliable, scalable stream processing capability can take anywhere between a few days and a few weeks, depending on the use case. On top of that, you also need to deal with continuous monitoring, maintenance and optimization. You may even need a dedicated team to handle this overhead. All in all, traditional stream processing is challenging, expensive and time-consuming. In contrast, serverless stream processing eliminates the headache of managing a complex architecture and the underlying infrastructure. It’s also more cost-effective, since you pay only for the resources you use. It’s natural that serverless stream processing solutions have started to appear. 
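As a minimal sketch of the contrast, the serverless model reduces the developer's share of a streaming pipeline to a stateless per-batch function; provisioning, scaling, and retries are the platform's problem. The event shape and handler signature below are illustrative, not any particular platform's API.

```python
# Minimal sketch of serverless stream processing: the developer writes only
# a stateless per-batch handler; infrastructure concerns are delegated to
# the platform. Event shape and handler signature are illustrative.

def handle_batch(events: list[dict]) -> list[dict]:
    """Filter a batch down to purchase events and enrich each one."""
    return [
        {**e, "value_cents": round(e["value"] * 100)}
        for e in events
        if e.get("type") == "purchase"
    ]

# A platform would invoke the handler per batch; we simulate one invocation.
batch = [
    {"type": "purchase", "value": 9.99},
    {"type": "page_view", "value": 0.0},
]
print(handle_batch(batch))
```

Everything a Spark deployment would force you to own (cluster sizing, checkpointing, tuning) is absent from the code, which is the point of the serverless trade-off.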


The Glaring Gap in Your Cybersecurity Posture: Domain Security

Because domain names are used for marketing and brand initiatives, security teams may feel that protecting online domain names falls under the marketing or legal side of the business. Or, they may have left domain protection in the hands of their IT department. But, if organizations are unfamiliar with who their domain registrars even are, chances are they are unaware of the policies the registrars use and the security measures they have in place for branded, trademarked domains. Domain security should be an essential branch of cybersecurity, protecting brands online, but it is not always the highest priority for consumer-grade domain registrars. Unfortunately, adversaries are privy to the growth in businesses’ online presence and the often minimal attention given to domain security, leading them to take a special interest in targeting corporate and/or government domain names that are left exposed. Organizations will continue to find themselves in the path of a perfect storm for domain and DNS attacks and potential financial or reputational devastation if they continue to allow the build-up of blind spots in their security posture.


Put guardrails around AI use to protect your org, but be open to changes

While a seasoned CISO might recognize that the output from ChatGPT in response to a simple security question is malicious, it’s less likely that another member of staff will have the same antenna for risk. Without regulations in place, any employee could be inadvertently stealing another company’s or person’s intellectual property (IP), or they could be delivering their own company’s IP into an adversary’s hands. Given that LLMs store user input as training data, this could contravene data privacy regulations, including GDPR. Developers are using LLMs to help them write code. When this is ingested, it can reappear in response to a prompt from another user. There is nothing that the original developer can do to control this because the LLM was used to help create the code, making it highly unlikely that they can prove ownership of it. This might be mitigated by using a GenAI license which helps enterprises to guard against their code being used as an input for training. However, in these circumstances, imposing a “trust but verify” approach is a good idea.


Why Generative AI Threatens Hospital Cybersecurity — and How Digital Identity Can Be One of Its Greatest Defenses

Writing convincing deceptive messages isn’t the only task cyber attackers use ChatGPT for. The tool can also be prompted to build mutating malicious code and ransomware by individuals who know how to circumvent its content filters. It’s difficult to detect and surprisingly easy to pull off. Ransomware is particularly dangerous to healthcare organizations as these attacks typically force IT staff to shut down entire computer systems to stop the spread of the attack. When this happens, doctors and other healthcare professionals must go without crucial tools and shift back to using paper records, resulting in delayed or insufficient care which can be life-threatening. Since the start of 2023, 15 healthcare systems operating 29 hospitals have been targeted by a ransomware incident, with data stolen from 12 of the 15 healthcare organizations affected. This is a serious threat that requires serious cybersecurity solutions. And generative AI isn’t going anywhere — it’s only picking up speed. It is imperative that hospitals lay thorough groundwork to prevent these tools from giving bad actors a leg up.


15 Essential Data Mining Techniques

The essence of data mining lies in the fundamental technique of tracking patterns, a process integral to discerning and monitoring trends within data. This method enables the extraction of intelligent insights into potential business outcomes. For instance, upon identifying a sales trend, organizations gain a foundation for taking strategic actions to leverage this newfound insight. When it’s revealed that a specific product outperforms others within a particular demographic, this knowledge becomes a valuable asset. Organizations can then capitalize on this information by developing similar products or services tailored to the demographic or by optimizing the stocking strategy for the original product to cater to the identified consumer group. In the realm of data mining, classification techniques play a pivotal role by scrutinizing the diverse attributes linked to various types of data. By discerning the key characteristics inherent in these data types, organizations gain the ability to systematically categorize or classify related data. This process proves crucial in the identification of sensitive information.
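The classification technique described, categorizing records by scrutinizing their attributes, can be shown with a toy rule-based classifier. The rules, field names, and categories here are invented examples, not a real taxonomy.

```python
# Toy illustration of the classification technique: categorize records by
# inspecting their attributes. Rules and categories are invented examples.

def classify_record(record: dict) -> str:
    # Records carrying identifiers or payment data are flagged as sensitive.
    if "ssn" in record or "card_number" in record:
        return "sensitive"
    # Behavioural attribute: frequent buyers form their own category.
    if record.get("purchases", 0) >= 10:
        return "loyal-customer"
    return "general"

records = [
    {"name": "a", "ssn": "xxx-xx-xxxx"},
    {"name": "b", "purchases": 14},
    {"name": "c", "purchases": 2},
]
print([classify_record(r) for r in records])  # ['sensitive', 'loyal-customer', 'general']
```

Production classifiers are usually learned from labeled data rather than hand-written, but the output is the same: a systematic mapping from attributes to categories, including the sensitive-data identification the article highlights.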


SolarWinds lawsuit by SEC puts CISOs in the hot seat

Without ongoing, open dialogue between these leaders, it’s impossible to guarantee complete awareness of the range of complications associated with potential cyber risks. Now that we’ve seen how these risks can easily extend beyond security concerns and into catastrophic financial and legal issues, it’s important that conversations about these risks are not taking place exclusively among CISOs. The roles and responsibilities of CISOs and other C-Suite executives vary dramatically, which can naturally result in siloed processes and priorities. However, to ensure alignment and effectively protect an organization from data breaches and legal recourse alike, it’s imperative that business leaders learn to “speak the same language” and share information to align their efforts and goals. CFOs and CISOs must collaborate to evaluate the relationships between cybersecurity incidents and legal risks. We can facilitate this by leveraging cyber risk quantification and management tools, which aggregate data to calculate, quantify and translate information about threats and vulnerabilities into lay terms and easily digestible data.
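The simplest form of the cyber risk quantification mentioned above is the standard annualized loss expectancy formula, ALE = SLE × ARO, which turns threat data into a dollar figure a CFO can act on. The figures below are illustrative (the $4.45M single-loss figure echoes IBM's reported 2023 average breach cost).

```python
# Minimal cyber-risk quantification using the standard annualized loss
# expectancy (ALE) formula: ALE = SLE * ARO, where SLE (single loss
# expectancy) is the cost of one incident and ARO (annualized rate of
# occurrence) is the expected number of incidents per year.

def annualized_loss_expectancy(sle_dollars: float, aro_per_year: float) -> float:
    return sle_dollars * aro_per_year

# e.g. a breach costing $4.45M, expected once every five years (ARO = 0.2)
ale = annualized_loss_expectancy(4_450_000, 0.2)
print(ale)  # 890000.0
```

Commercial risk quantification tools elaborate on this with loss distributions and Monte Carlo simulation, but the translation step is the same: threat and vulnerability data in, financial exposure out.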


CTO interview: Greg Lavender, Intel

“Our confidential computing capability is also a privacy-ensuring capability,” says Lavender. “Europe is ahead in this area, with the notion of sovereign clouds. Intel partners with some of the European governments on sovereign cloud using Intel’s platforms for confidential computing. The privacy-preserving capabilities are built into these platforms, which beyond government, will also be useful in regulated industries like financial services, healthcare and telcos.” “We also see a convergence in AI that will open up a big market for our privacy-ensuring software and hardware,” says Lavender. “You spend a lot of time prepping your data, tagging your data, getting your data ready for training, usage or inference usage. You want to do that securely in a multi-tenant environment. Our platforms give you the opportunity to do your training securely between the CPU and the GPU, and then you can deploy it securely in the cloud or at the edge.” “I’m talking with a lot of CIOs about this technology, because data is now such a valuable thing. It’s what you use to train your models. You don’t want somebody else to get access to that data because then they can use it to train their models and offer competing services.”



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - December 03, 2023

The need to upskill India’s tech talent is critical. Why? Because India has perhaps the most to gain or lose when it comes to the impact of Generative AI. A survey by ServiceNow found that India faces a critical need to upskill 1.62 crore workers in AI and automation, creating 4.7 million new jobs in technology by 2027 to meet the nation’s skill deficit. ... Now, the question is, can Generative AI help train such a large young population? Yes! This technology can create personalized learning paths. With modules integrated with AI to optimize outcomes, students can learn better with real-time feedback and take advantage of a more customized learning experience. If India has to become a global economic superpower, engineers must become tech-agnostic and adaptable in a world that’s changing fast. A Generative AI layer must be integrated into their education modules. This will equip them with cutting-edge skills and ensure versatility – from software developers to prompt engineers, enabling them to navigate diverse technological domains.


5 Ways To Demonstrate Leadership Skills In A Team Meeting

Don't just speak to be heard; speak to provide tangible solutions. Leaders are always thinking about innovative ways to drive forward positive business outcomes, and it's your responsibility in these meetings to think of creative solutions to the challenges your team is facing. Even if you don't have the complete solution yet, make some recommendations with a "What if we tried XYZ approach?" This engages other team members and shows that you are confident with sharing your ideas, while simultaneously ensuring everyone is included and feels heard—a mark of a true leader. ... One of the most difficult and embarrassing situations any professional could be placed in is to acknowledge when they've made a mistake or accidentally jeopardized the success of the team project. But it's the bravest thing to do, and it's an essential quality of a rising leader to take ownership of your mistakes. Refuse to cast blame on others or talk behind your colleagues' backs, because this can destroy trust. Instead, seek ways to rectify the situation and actively discuss solutions.


How can AI and advanced analytics streamline due diligence processes in financial industries

In an era of increasing digital transactions, customer due diligence (CDD) demands robust identity verification processes. AI brings biometric data, document analysis, and identity validation methods to the forefront, enhancing the accuracy and speed of customer due diligence. OCR, face match, liveness detection, match logic, and digital address verification facilitate contactless KYC and paperless onboarding. These technologies not only streamline onboarding processes but also contribute to a more secure and fraud-resistant financial ecosystem. Staying compliant with an ever-changing regulatory landscape is a perpetual challenge for financial institutions. AI provides a dynamic solution by automating the monitoring and adaptation to regulatory changes. Leveraging data analytics to best utilise and parse alternate data sources, such as utility bills, financial account data, etc., can help further track customer behaviour while empowering the team to identify discrepancies and stay compliant. From anti-money laundering (AML) to know-your-customer (KYC) tech, AI ensures that due diligence processes remain effective and consistently aligned with the latest regulatory standards. 


The World Depends on 60-Year-Old Code No One Knows Anymore

The problem is that very few people are interested in learning COBOL these days. Coding it is cumbersome, it reads like an English lesson (too much typing), the coding format is meticulous and inflexible, and it takes far longer to compile than its competitors. And since nobody's learning it anymore, programmers who can work with and maintain all that code are increasingly hard to find. Many of these "COBOL cowboys" are aging out of the workforce, and replacements are in short supply. ... If it proves successful, the watsonx code assistant could have huge implications for the future, but not everyone is convinced it's the silver bullet IBM says it is. Many who remember IBM’s previous AI experiment, Watson Health, are hesitant to trust another big AI project from the company because the previous one failed so miserably and didn't deliver on its high-flying promises. Gartner Distinguished Vice President and Analyst Arun Chandrasekara is also skeptical because “IBM has no case studies, at this time, to validate its claims,” he says. 


Tech Works: How to Build a Life-Long Career in Engineering

As Hightower put it, “You get to move as fast as you’re willing to believe that you can. You identify a problem and you execute it.” Aim to be agile in mindset and practice as long as you can, both with your organization and with your own career. Nothing is precious. Always look for opportunities to learn. If you get stuck with one language or framework, it limits where you can move and also your ability to change. It may even have you wasting time rebuilding things in your framework of choice — like Hightower acknowledged he used to do with Python. ... IT is a massive cost center that often demands an explanation from an organization’s budget makers, especially with a recession looming. An underappreciated benefit of platform engineering, for instance, is that it can enable a conversation between the tech side and the business side. Developers and engineers benefit from this conversation. They feel a deeper sense of purpose when their work is more closely connected to business goals. That means, especially in a time of increased automation, storytelling is of great value. To act as a translator and context giver can help boost an engineer’s value.


Bridging the gap between cloud vs on-premise security

Cloud-native security architectures like SASE and SSE can offer the east-west protection typically delivered by a data center firewall by rerouting all internal traffic through the closest point of presence (PoP). Unlike a local firewall that comes with its own configuration and management constraints, firewall policies configured in the SSE PoP can be managed via the platform’s centralized management console. ... As security functions move increasingly to the cloud, it’s crucial not to lose sight of the controls and security measures needed on-site. Cloud-native protections aim to increase coverage while reducing complexities and boosting convergence. As critical as it is to enable east-west traffic protection within SASE and SSE architectures, it’s equally important to maintain the unified visibility, control, and management offered by such platforms. To achieve this, organizations must avoid getting carried away by emerging threats and adding back disparate security solutions. 


5 tweaks every developer should make in Windows 11

The new Windows Terminal is nothing short of fantastic. It's a night-and-day improvement that allows you to run Powershell, cmd, and WSL sessions within one window. It's customizable, has great tab support, and is even open-source. Its JSON settings configuration is similar to VSCode's and is well worth exploring, and the inbuilt GUI menus allow you to set your default shell among a range of other things. The new terminal also supports side-by-side windows or split panes, and background opacity settings. ... While Microsoft has made great strides in recent years trying to win back developers, the Windows file system has often been a pain point. Developers have long been used to a Linux/Unix file system, where managing and creating thousands of small files for dependencies is of trivial impact on the overall system performance, and many common tools have been built with this in mind. NTFS has already been known to suffer from a performance gap with the de facto Linux standard ext4, and Windows Defender's real-time protection can slow this down even further. 
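For reference, the Terminal tweaks described above live in its settings.json. The fragment below is a sketch only: the default-profile GUID is a placeholder for your own shell profile, and exact key support varies by Terminal version.

```json
{
    // settings.json fragment (illustrative; GUID is a placeholder)
    "defaultProfile": "{00000000-0000-0000-0000-000000000000}",
    "profiles": {
        "defaults": {
            "opacity": 85
        }
    },
    "actions": [
        { "command": { "action": "splitPane", "split": "auto" }, "keys": "alt+shift+d" }
    ]
}
```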


Data Observability: Reliability in the AI Era

Data observability is characterized by the ability to accelerate root cause analysis across data, system, and code and to proactively set data health SLAs across the organization, domain, and data product levels. ... Data engineers are going to be building more pipelines faster (thanks Gen AI!) and tech debt is going to be accumulating right alongside it. That means degraded query, DAG, and dbt model performance. Slow running data pipelines cost more, are less reliable, and deliver poor data consumer experience. That won’t cut it in the AI era when data is needed as soon as possible. Especially not when the economy is forcing everyone to take a judicious approach with expense. That means pipelines need to be optimized and monitored for performance. Data observability has to cater for it. ... This will shock no one who has been in the data engineering or machine learning space for the last few years, but LLMs perform better in areas where the data is well-defined, structured, and accurate. Not to mention, there are few enterprise problems to be solved that don’t require at least some context of the enterprise. 
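A data health SLA of the kind described can start as a simple freshness check: compare each table's last-updated timestamp against an agreed staleness threshold. The table names, thresholds, and timestamps below are all invented for illustration.

```python
# Minimal sketch of a data-freshness SLA check: flag any table whose last
# update is older than its agreed threshold. Names and SLAs are invented.
from datetime import datetime, timedelta, timezone

now = datetime(2023, 12, 5, 12, 0, tzinfo=timezone.utc)

FRESHNESS_SLA = {          # table -> maximum allowed staleness
    "orders":  timedelta(hours=1),
    "revenue": timedelta(hours=24),
}

last_updated = {
    "orders":  now - timedelta(hours=3),   # stale: exceeds 1h SLA
    "revenue": now - timedelta(hours=2),   # within 24h SLA
}

def sla_breaches(now: datetime) -> list[str]:
    return [
        table for table, sla in FRESHNESS_SLA.items()
        if now - last_updated[table] > sla
    ]

print(sla_breaches(now))  # ['orders']
```

A full data observability platform layers volume, schema, and distribution monitors on top of checks like this and ties breaches back to the responsible pipeline, DAG, or dbt model for root cause analysis.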


Top 9 Cybersecurity Trends in 2024

Looking to the future of cybersecurity, companies will need to implement new cyber defenses to combat info stealer malware, he adds. Organizations should seek comprehensive malware remediation strategies to neutralize the stolen data before it’s used for other cyber incidents. “Session cookies, passwords, and APIs can remain active for weeks or months after they were initially stolen, leaving organizations vulnerable to follow-up or repeat attacks using the same data,” according to Hilligoss. “A holistic post-infection remediation plan that includes monitoring the dark web for malware-stolen data allows enterprises to invalidate any compromised sessions and patch vulnerabilities before criminals use the information to cause harm.” ... In addition, the SEC charges against the SolarWinds chief information security officer (CISO) will change that role in 2024, according to Thomas Kinsella, co-founder and chief customer officer at Tines, a security workflow automation company. The SEC’s decision means more cybersecurity issues will escalate to boardroom issues as CISOs force the entire company to accept the risk rather than shouldering it alone.
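
At its core, the step Hilligoss describes, invalidating any session whose token shows up in malware-stolen data, reduces to a set comparison in its simplest form. A toy sketch (the data shapes are illustrative, not any vendor's API):

```python
def remediate_stolen_sessions(active_sessions: dict, leaked_tokens: set) -> dict:
    """Drop any session whose token appears in malware-stolen data
    (e.g., surfaced by dark-web monitoring), keeping the rest intact."""
    return {user: token for user, token in active_sessions.items()
            if token not in leaked_tokens}
```

The hard parts in practice are sourcing the leaked-token feed and forcing re-authentication cleanly, but the invalidation itself should be this mechanical, so it can run continuously rather than after an incident.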


Turmoil at OpenAI shows we must address whether AI developers can regulate themselves

In the background, there have been reports of vigorous debates within OpenAI regarding AI safety. This not only highlights the complexities of managing a cutting-edge tech company, but also serves as a microcosm for broader debates surrounding the regulation and safe development of AI technologies. Large language models (LLMs) are at the heart of these discussions. LLMs, the technology behind AI chatbots such as ChatGPT, are exposed to vast sets of data that help them improve what they do – a process called training. However, the double-edged nature of this training process raises critical questions about fairness, privacy, and the potential misuse of AI. Training data reflects both the richness and biases of the information available. The biases may reflect unjust social concepts and lead to serious discrimination, the marginalising of vulnerable groups, or the incitement of hatred or violence. Training datasets can be influenced by historical biases. 



Quote for the day:

“The road to success and the road to failure are almost exactly the same.” -- Colin R. Davis

Daily Tech Digest - December 02, 2023

Infrastructure as Code and Security – Five Ways to Improve Your Approach

Implementing IaC security processes will affect how teams work. For security, it is another set of images that have to be tracked for potential vulnerabilities, with changes flagged for remediation. For developers, these changes can be another set of work alongside requests from the business and other fixes that are needed. This can easily become a burden: having to learn another set of tools to track issues or find the list of problems changes how developers work, making it harder to get issues fixed. To solve this problem, security can integrate into the workflows and tools that developers use every day. Developers can automate security scans using APIs from within their development environments, and integrate with code editors, Git repositories, and CI/CD tools to provide early visibility. These results can then be fed into the developer workflow, flagging potential issues that need to be fixed alongside other requests for work. Rather than being a separate stream of work that developers have to consciously engage with, security fixes to IaC should be treated just the same as other tasks.
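
In practice, the "early visibility" step can be as small as a policy check that runs in CI and emits its findings as ordinary work items. The sketch below checks a parsed IaC document for two common misconfigurations; the attribute names mimic Terraform-style resources and are illustrative, not any real scanner's API:

```python
def find_iac_issues(resources: list) -> list:
    """Flag obviously risky settings in a parsed IaC document.

    A CI job would run a check like this on every pull request and feed the
    findings into the same tracker developers already use for other work.
    """
    issues = []
    for res in resources:
        # Security groups that accept traffic from anywhere on the internet.
        if res.get("type") == "aws_security_group":
            for rule in res.get("ingress", []):
                if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                    issues.append(f"{res['name']}: ingress open to the internet")
        # Storage buckets readable by the public.
        if res.get("type") == "aws_s3_bucket" and res.get("acl") == "public-read":
            issues.append(f"{res['name']}: bucket is publicly readable")
    return issues
```

Real scanners (Checkov, tfsec, and similar) implement hundreds of such policies, but the integration pattern is the same: scan early, surface findings where developers already work.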


Deconstructing Telemetry Pipelines: Streamlining Data Management For The Future

The primary objectives of telemetry pipelines are to reduce data clutter, add context and save resources. Good data pipelines build multiple views into the pipeline, organizing and contextualizing data from the get-go. By organizing and labeling data, these pipelines make it easier to extract valuable information from a single source of truth rather than combing through scattered data puddles. Contextualization is the key. It involves tagging data with labels, making it easier to group, filter and analyze as it moves through the system. ... One of the significant issues that telemetry pipelines address is the growing complexity of data management and the challenges posed by tool sprawl. As more data sources are added to an organization's infrastructure and tools multiply, the complexity becomes exponential. Managing this complexity, validating assumptions and keeping data silos in check can become a significant burden. When you’re not indexing correctly, you’re just paying for extra toil. Telemetry data can come from various sources, not just machine data. This could include things like sentiment analysis or even insights into how often somebody uses particular buttons in their smart car. 
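
The tagging step the author describes can be sketched in a few lines: enrich each raw event with labels at ingestion so that grouping and filtering become trivial downstream. The field names here are illustrative:

```python
def contextualize(event: dict, source: str, env: str, team: str) -> dict:
    """Attach labels to a raw telemetry event at ingestion time so it can be
    grouped and filtered later without guessing at its origin."""
    return {**event, "labels": {"source": source, "env": env, "team": team}}

def filter_by_label(events: list, key: str, value: str) -> list:
    """Select events carrying a given label, e.g. everything from prod."""
    return [e for e in events if e.get("labels", {}).get(key) == value]
```

With labels applied consistently at the pipeline's front door, every downstream tool sees the same single source of truth instead of its own scattered data puddle.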


8 change management questions every IT leader must answer

In the digital transformation era, IT success is no longer defined by meeting a go-live date or keeping within a budget. It is determined by the creation of shared vision and goals; achievement of leadership engagement and alignment; broad buy-in and adoption of new systems, platforms, and processes; and realization of business outcomes. ... CIOs must view change management as a kind of GPS for transformation initiatives, designed to keep them on track from the get-go. “If the change rationale isn’t clearly established and communicated at the start, the whole initiative will be an uphill battle,” says Jeanine L. Charlton, senior vice president and chief technology and digital officer at Merchant’s Fleet, where she has been leading the charge to rethink the way the 60-year-old fleet management firm operates. As IT leaders ask their organizations to think differently about the way they do things and adopt radically different alternatives, they must also rethink their own approach to ushering in these changes. 


Thinking in Systems: A Sociotechnical Approach to DevOps

The late philosopher Bernard Stiegler argued that technology is constitutive of human cognition — that is, our use of tools and technology fundamentally shapes our minds and our understanding of the world. That means that adopting better tools improves the ways we think and work. Specifically, tools that map the dizzying array of inputs and outputs within the organization help us to reason through value chains and where we fit within them. This new breed of tools uses directed acyclic graphs to capture your software infrastructure, microservices, tests and jobs. Imagine you are a software developer in a big organization with thousands of developers. Your team owns an API that controls the flow of widgets through time and space. Your API is used by dozens of other teams, but you lack a sense of greater value, of your place in the overall system. How does your API result in business value? What are its upstream and downstream dependencies? If your API were to disappear tomorrow, what consequence would it have for the business? Tools like Garden capture a map of value for software so teams like yours can, at any time, view their part of the whole.
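
A directed acyclic graph of services makes the upstream/downstream questions above directly answerable. A minimal sketch, with an invented service graph rather than any real tool's API:

```python
# Edges point from a service to the services it depends on (names invented).
GRAPH = {
    "checkout_ui": ["widget_api"],
    "billing_job": ["widget_api"],
    "widget_api": ["widget_db"],
    "widget_db": [],
}

def upstream(service: str) -> set:
    """Everything `service` transitively depends on."""
    seen, stack = set(), list(GRAPH[service])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(GRAPH[dep])
    return seen

def downstream(service: str) -> set:
    """Everything that transitively depends on `service`:
    what breaks if it disappears tomorrow."""
    hit, changed = set(), True
    while changed:
        changed = False
        for svc, deps in GRAPH.items():
            if svc not in hit and (service in deps or hit & set(deps)):
                hit.add(svc)
                changed = True
    return hit
```

Asking `downstream("widget_api")` answers the "what consequence would it have on the business" question mechanically; production tools add ownership, value annotations, and live status on top of this same graph structure.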


The evolution of multitenancy for cloud computing

The evolution of multitenancy in public cloud computing services will be driven by advancements in container orchestration, edge computing, and artificial intelligence. These technologies will further enhance the capabilities of multitenant environments, such as using an edge system to allocate some of the processing that is tasked to a multitenant architecture, or leveraging AI to direct allocation. This will be much better than the simple algorithms most are using today. As the complexity of client requirements grows, public cloud providers will continue to invest in refining multitenancy approaches. I suspect this will mean focusing on workload isolation, data governance, and compliance management within shared infrastructures. Also, the convergence of multitenancy with hybrid and multicloud architectures will soon be a thing, even though multicloud is already common. The idea will be to offer seamless integration and interoperability across cloud environments, supporting the notion of heterogeneity at the multitenant level and not at the application and data levels, which is how things are done now.
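
For contrast with the AI-directed allocation the author anticipates, the "simple algorithms most are using today" often amount to heuristics like least-loaded placement. A toy sketch, with invented node names and capacities:

```python
def place_workload(nodes: dict, demand: float) -> str:
    """Least-loaded placement: put the tenant workload on the node with the
    most free capacity, then record the usage.

    This is a stand-in for today's simple heuristics; an AI-directed
    allocator would instead predict demand and co-tenancy interference.
    """
    name = max(nodes, key=lambda n: nodes[n]["capacity"] - nodes[n]["used"])
    nodes[name]["used"] += demand
    return name
```

The limitation is visible even in this sketch: the heuristic only looks at current headroom, with no notion of workload patterns over time, which is exactly the gap predictive allocation aims to close.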


Top 10 Software Architecture Patterns to Follow in 2024

Serverless architecture reduces the requirement for managing servers. It allows developers to focus solely on writing code while cloud providers manage the infrastructure. Serverless functions are event-driven and executed in response to specific events or triggers. Serverless architectures offer automatic scaling, reduced operational overhead, and cost efficiency. Developers can focus on writing code without worrying about server provisioning or maintenance. ... Event Sourcing is a pattern where the state of an application is determined by a sequence of events rather than the current state. Events are stored, and the application's state can be reconstructed by replaying these events. Event Sourcing provides a full audit trail of changes, enables temporal queries, and supports advanced analytics. It is useful in scenarios where historical data tracking is crucial. ... Event-driven architecture is centered around the concept of events, where components communicate by producing and consuming events. Events represent meaningful occurrences within a system and can trigger actions in other parts of the application.
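
Event Sourcing in particular is easy to show concretely: the current state is never stored directly, only derived by folding the event log. A minimal sketch using an invented bank-account example:

```python
def apply(state: dict, event: dict) -> dict:
    """Fold a single event into the current state."""
    kind = event["type"]
    if kind == "deposited":
        return {**state, "balance": state["balance"] + event["amount"]}
    if kind == "withdrawn":
        return {**state, "balance": state["balance"] - event["amount"]}
    return state  # unknown events are ignored, preserving forward compatibility

def replay(events: list) -> dict:
    """Reconstruct application state purely from its event log."""
    state = {"balance": 0}
    for event in events:
        state = apply(state, event)
    return state
```

Because the log is append-only, the same replay over a truncated log answers temporal queries ("what was the balance last Tuesday?") and the full log is itself the audit trail the excerpt mentions.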


Data Management and Consolidation in the Integration of Corporate Information Systems

In the realm of corporate information systems, integration serves a crucial role in improving how we handle and oversee data. This process involves merging data from diverse sources into a single, coherent system, ensuring that all users have access to the same, up-to-date information. The end goal is to maintain data that is both accurate and consistent, which is essential for making informed decisions. This task, known as data management and consolidation, is not just about bringing data together; it's about ensuring the data is reliable, readily available, and structured in a way that supports the company's operations and strategic objectives. By consolidating data, we aim to eradicate inconsistencies and redundancies, which not only enhances the integrity of the data but also streamlines workflows and analytics. It lays the groundwork for advanced data utilization techniques such as predictive analytics, machine learning models, and real-time decision-making support systems. Effective data management and consolidation require the implementation of robust ETL processes, middleware solutions like ESBs, and modern data platforms that support event-driven architectures.
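
A miniature version of the consolidation step: merge records from several systems on a shared key so duplicates collapse into one authoritative view. The field names below are invented, and a real ETL pipeline would add validation and explicit conflict-resolution rules:

```python
def consolidate(sources: list) -> dict:
    """Merge customer records from several systems into one view, keyed by a
    shared ID. Later sources win on conflicting fields, and duplicate
    records collapse into a single entry."""
    merged = {}
    for records in sources:
        for rec in records:
            key = rec["customer_id"]
            # Start from whatever we already know, then overlay the new record.
            merged[key] = {**merged.get(key, {}), **rec}
    return merged
```

The "later sources win" rule here is the simplest possible conflict policy; production consolidation typically ranks sources by trustworthiness or recency per field instead.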


Decoding The Taj Hotels’ Data Breach And India’s Growing Cybersecurity Battle

In simple terms, a social engineering attack uses psychological manipulation to get access to sensitive data via human interactions. “Hackers research LinkedIn and launch targeted attacks. They gather information about employees and third-party contractors connected with the target organisation and send phishing emails. All they need is an unsuspecting employee or a contractor clicking the link. Then, a variety of innovative social engineering actions follow, leading to APTs. The hackers end up harvesting credentials to gain access to systems and applications,” Venkatramani explained. In the hospitality sector, cyberattacks are predominantly fuelled by a lack of password security hygiene, which encompasses issues such as inadequate credential management, widespread password reuse across various IT assets, insufficient controls on access authorisation, insecure sharing methods like phone calls, neglect of embedded credentials in development environments, and the disregard for essential practices like robust password creation and regular rotation, Venkatramani added.


Website spoofing: risks, threats, and mitigation strategies for CIOs

Protecting the website and preventing users from falling prey to website spoofing scams requires a multilayered approach in which various methods and procedures are employed. Any points of vulnerability on the website must be identified. The organization’s employees must be educated, raising their awareness of scams like phishing attacks and brand impersonation so they remain vigilant about potential attacks. In addition, the most effective way of identifying and preventing spoofing attacks is adopting the right solution. Compliance, software updates, issue resolution, customer support, and various other concerns can then be handled by the third-party service provider. ... In a world where technological progress can be exploited for malicious purposes, safeguarding data emerges as the paramount goal for any organization. With the right defense methods and tools, businesses can confidently navigate the digital landscape, conducting day-to-day operations without the looming fear of falling victim to clandestine attacks.


2024: The Year of Wicked Problems

If humans are displaced by the implementation of advanced analytics and machine learning, which can provide equal or superior productivity, and humans cannot evolve quickly enough (or at all), is there a social safety net to fall back on? If the productivity gains centralize wealth at the top of the economic landscape, does this result in mass unemployment and extreme wealth inequality? How will this impact developed nations and developing nations? These are the wicked problems that governments around the world will have to tackle by identifying creative solutions that will have a positive impact on society. In addition, will the extreme polarization of political thinking, further enhanced by advanced algorithmic information sharing and prioritization, preclude us from coming together in unity to address these challenges in a timely manner? When addressing wicked problems associated with society, there are often archaic laws and regulations that hinder our ability to move fast and efficiently in the iterative design thinking approach. Can these be changed rapidly enough to enable the iterative approach to problem-solving that will allow us to address these wicked problems?



Quote for the day:

"The road to success and the road to failure are almost exactly the same." -- Colin R. Davis

Daily Tech Digest - November 30, 2023

Super apps: the next big thing for enterprise IT?

Enterprise super apps will allow employers to bundle the apps employees use under one umbrella, he said. This will create efficiency and convenience, where different departments can select only the apps they want, much like a marketplace, to customize their working experiences. Other advantages of super apps for enterprises include providing a more consistent user experience, combating app fatigue and app sprawl, and enhancing security by consolidating functions into one company-managed app. Gartner analyst Jason Wong said the analyst firm is seeing interest in super apps from organizations, including big box stores and other retailers, that have a lot of frontline workers who rely on their mobile devices to do their jobs. One company that has adopted a super app to enhance the experience of its frontline workers and other employees is TeamHealth, a leading physician practice in the US. TeamHealth is using an employee super app from MangoApps, which unifies all the tools and resources employees use daily within one central app.


Meta faces GDPR complaint over processing personal data without 'free consent'

The case centres on whether Meta can legitimately claim to have obtained free consent from its customers to process their data, as required under GDPR, when the only alternative is for customers to pay a substantial fee to opt out of ad-tracking. The complaint will be watched closely by social media companies such as TikTok, which are reported to be considering offering ad-free services to customers outside the US to meet the requirements of European data protection law. Meta denied that it was in breach of European data protection law, citing a European Court of Justice ruling in July 2023 which it said expressly recognised that a subscription model was a valid form of consent for an ad-funded service. Spokesman Matt Pollard referred to a blog post announcing Meta’s subscription model, which stated, “The option for people to purchase a subscription for no ads balances the requirements of European regulators while giving users choice and allowing Meta to continue serving all people in the EU, EEA and Switzerland”.


India’s Path to Cyber Resilience Through DevSecOps

DevSecOps, a collaborative methodology between development, security, and operations, places a strong emphasis on integrating security practices into the software development and deployment processes. In India, the approach has gained substantial traction due to several reasons, including a security-first mindset, adherence to compliance requirements and escalating cybersecurity threats. A survey revealed that the primary business driver for DevSecOps adoption is a keen focus on business agility, achieved through the rapid and frequent delivery of application capabilities, as reported by 59 per cent of the respondents. From a technological perspective, the most significant factor is the enhanced management of cybersecurity threats and challenges, a factor highlighted by 57 per cent of the participants. Businesses now understand the importance of proactive security measures. DevSecOps encourages a security-first mentality, ensuring that security is an integral part of the development process from the outset.


Cybersecurity and Burnout: The Cybersecurity Professional's Silent Enemy

In the world of cybersecurity, where digital threats are a constant, the mental health of professionals is an invaluable asset. Mindfulness not only emerges as a shield against the stress and burnout that pose security risks to organizations, but it also becomes a key strategy to reduce the costs associated with lost productivity and staff turnover. By adopting mindfulness practices and preventing burnout, cybersecurity professionals not only preserve their well-being, but also contribute to a healthier work environment, improve the responsiveness and effectiveness of cybersecurity teams, and ensure the continued success of companies in this critical technology field. Cybersecurity challenges are multidimensional. They cannot be managed in only one dimension. Mindfulness is an essential tool to keep us one step ahead. By recognizing the value of emotional well-being in the fight against cyberattacks, we can build a stronger and more sustainable defense. Cybersecurity is not only a technical issue, but also a human one, and mindfulness presents itself as a key piece in this intricate security puzzle.


Will AI replace Software Engineers?

While AI is automating some tasks previously done by devs, it’s not likely to lead to widespread job losses. In fact, AI is creating new job opportunities for software engineers with the skills and expertise to work with AI. According to a 2022 report by the McKinsey Global Institute, AI is expected to create 9 million new jobs in the United States by 2030. The jobs that are most likely to be lost to AI are those that are routine and repetitive, such as data entry and coding. However, software engineers with the skills to work with AI will be in high demand. ... Embrace AI as a tool to enhance your skills and productivity as a software engineer. While there's concern about AI replacing software engineers, it's unlikely to replace high-value developers who work on complex and innovative software. To avoid being replaced by AI, focus on building sophisticated and creative solutions. Stay up-to-date with the latest AI and software engineering developments, as this field is constantly evolving. Adapt to the changing landscape by acquiring new skills and techniques. Remember that AI and software engineering can collaborate effectively, as AI complements human skills. 


Bridging the risk exposure gap with strategies for internal auditors

Without a strategic view of the future — including a clear-eyed assessment of strengths, weaknesses, opportunities, threats, priorities, and areas of leakage — internal audit is unlikely to recognize actions needed to enable success. There is no bigger threat to organizational success than a misalignment between exponentially increasing risks and a failure to respond due to a lack of vision, resources, or initiative. Create and maintain a good, well-documented strategic plan for your internal audit function. This can help you organize your thinking, force discipline in definitions, facilitate implementation, and continue asking the right questions. Nobody knows for certain what lies ahead, and a well-developed strategic plan is a key tool for preparing for chaos and ambiguity. ... Companies may have less time than they think to prepare for compliance, and internal auditors should be supporting their organizations in getting the right enabling processes and technologies in place as soon as possible. This will require a continuing focus on breaking down silos and improving how internal audit collaborates with its risk and compliance colleagues. 


Generative AI in the Age of Zero-Trust

Enter generative AI. Generative AI models generate content, predictions, and solutions based on vast amounts of available data. They’re making waves not just for their ‘wow’ factor, but for their practical applications. It’s only natural that employees would gravitate to the latest technology offering the ability to make them more efficient. For cybersecurity, this means potential tools that offer predictive threat analysis based on patterns, provide automatic code fixes, dynamically adjust policies in response to evolving threat landscapes and even automatically respond to active attacks. If used correctly, generative AI can shoulder some of the burdens of the complexities that have built up over the course of the zero-trust era. But how can you trust generative AI if you are not in control of the data that trains it? You can’t, really. ... This is forcing organizations to start setting generative AI policies. Those that choose the zero-trust path and ban its use will only repeat the mistakes of the past. Employees will find ways around bans if it means getting their job done more efficiently. Those who harness it will make a calculated tradeoff between control and productivity that will keep them competitive in their respective markets.


Organizations Must Embrace Dynamic Honeypots to Outpace Attackers

There are a number of ways in which AI-powered honeypots are superior to their static counterparts. The first is that, because they can evolve independently, they become far more convincing over time. This sidesteps the problem of constantly making manual adjustments to present the honeypot as a realistic facsimile. Secondly, as the AI learns and develops, it will become far more adept at planting traps for unwary attackers. Hackers will not only have to go slower than usual to try to avoid those traps, but once one is triggered, it will likely provide far richer data to defense teams about what attackers are clicking on, the information they’re after, and how they’re moving across the site. Finally, using AI tools to design honeypots means that, under the right circumstances, even tangible assets can be turned into honeypots. ... Therefore, using tangible assets as honeypots allows defense teams to target their energy more efficiently and enables the AI to learn faster, as there will likely be more attackers coming after a real asset than a fake one.
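
For reference, the static baseline that AI is said to improve upon can be tiny: a decoy endpoint that always fails but records every attempt. A toy sketch with no networking, purely illustrative:

```python
from datetime import datetime, timezone

TRAP_LOG = []  # what defenders actually want: a record of attacker behaviour

def fake_login(username: str, password: str, source_ip: str) -> dict:
    """A decoy login endpoint: it always rejects, but quietly records every
    attempt so defenders learn which credentials and assets are targeted."""
    TRAP_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "ip": source_ip,
        "username": username,
        "password": password,
    })
    return {"status": 401, "body": "Invalid credentials"}
```

The static version above never changes its behaviour, which is exactly how attackers eventually fingerprint it; the article's argument is that an adaptive, AI-driven decoy keeps the facade evolving so that fingerprinting stops working.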


Almost all developers are using AI despite security concerns, survey suggests

Many developers place far too much trust in the security of code suggestions from generative AI, the report noted, despite clear evidence that these systems consistently make insecure suggestions. “The way that code is generated by generative AI coding systems like Copilot and others feels like magic," Maple said. "When code just appears and functionally works, people believe too much in the smoke and mirrors and magic because it appears so good.” Developers can also value machine output over their own talents, he continued. "There’s almost an imposter syndrome," he said. ... Because AI coding systems use reinforcement learning algorithms to improve and tune results when users accept insecure open-source components embedded in suggestions, the AI systems are more likely to label those components as secure even if this is not the case, it continued. This risks the creation of a feedback loop where developers accept insecure open-source suggestions from AI tools and then those suggestions are not scanned, poisoning not only their organization’s application code base but the recommendation systems for the AI systems themselves, it explained.


Former Uber CISO Speaks Out, After 6 Years, on Data Breach, SolarWinds

Sullivan says the key mistake he made was not bringing in third-party investigators and counsel to review how his team handled the breach. "The thing we didn't do was insist that we bring in a third party to validate all of the decisions that were made," he says. "I hate to say it, but it's more CYA." Now, Sullivan advises other CISOs and companies about navigating their responsibilities in disclosing breaches, especially as the new Securities & Exchange Commission (SEC) incident reporting requirements are set to take effect. Sullivan says he welcomes the new regulations. "I think anything that pushes towards more transparency is a good thing," he says. He recalls that when he was on former President Barack Obama's Commission on Enhancing National Cybersecurity, Sullivan was pushing to give companies immunity if they are transparent early on during security incidents. That hasn't happened until now, according to Sullivan, who says the jury is still out on the new regulations, which will require action starting in December.



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein