Showing posts with label regtech. Show all posts
Showing posts with label regtech. Show all posts

Daily Tech Digest - November 05, 2025


Quote for the day:

"Effective leaders know that resources are never the problem; it's always a matter of resourfulness." -- Tony Robbins



AI web browsers are cool, helpful, and utterly untrustworthy

AI browsers can and do interact with everything on a web page: summarizing content, reading emails, composing posts, looking at images, etc., etc. Every element on the page, whether you can see it or not, can hide an attack. A hacker can embed clipboard manipulations or other hacks that traditional browsers would never, not ever, execute automatically. ... AI browser agents can be tricked by hidden instructions embedded in websites via invisible text, images, scripts, or, believe it or not, bad grammar. Your eyes might glaze over at a long run-on sentence, but your AI web browser will read it all, including instructions for an attack hidden in plain sight within it. Such malicious commands are read and executed by the AI. This can lead to exposure of sensitive data, such as emails, authentication tokens, and login details, or triggering unwanted actions, including sending emails, posting to social media, or giving your computer a bad case of malware. ... Privacy is pretty much lost these days anyway, but with AI web browsers, we’ll have all the privacy of a goldfish in a bowl. Since AI browsers monitor our every last move, they process much more granular personal information than conventional browsers. Worrying about cookies and privacy is so 1990s. AI browsers track everything. This is then used to create highly detailed behavioral profiles. What? You didn’t know that AI browsers have built-in memory functions that retain your interactions, browser history, and content from other apps? How do you think they do what they do? Intuition? ESP?


AI can flag the risk, but only humans can close the loop

Companies embedding AI into vendor risk processes need governance structures that ensure transparency, accountability, and compliance. This includes maintaining an approved sources catalogue and requiring either the system or an analyst to validate findings and document the rationale behind them. Data minimization should be built into the design by defining what information is always in scope, such as sanctions or embargo lists, and what is contextually relevant, while excluding protected or sensitive attributes under GDPR and configuring AI to ignore them. Risk assessments should be tiered, calibrating the depth of checks to supplier criticality and geography to avoid unnecessary data collection for low-risk relationships while expanding scope for high-risk scenarios. Human accountability remains essential, with a named individual owning due diligence decisions while AI provides recommendations without replacing human judgment ... Regulators are likely to allow AI use if firms establish strong controls and demonstrate effective oversight, as required by frameworks like the EU AI Act. Responsibility remains with individuals or organizations; liability does not transfer to AI itself. While regulators may struggle to specify detailed technical rules, one clear shift is that “the data volume was too large to review” will no longer be an acceptable defense.


10 top devops practices no one is talking about

“A key, yet overlooked, devops practice is building true shared ownership, which means more than just putting teams in the same chat room,” says Chris Hendrich, associate CTO of AppMod at SADA. “It requires making production reliability and performance a primary success indicator for development, not solely an operational concern. This shared accountability is what builds the organizational competency of creating better, more resilient products.” ... “Baking an integrated code quality and code security approach into your devops workflow isn’t just good practice, it’s essential and a game-changer,” says Donald Fischer, VP at Sonar. “Tackling security alongside quality from day one isn’t merely about early bug detection; it’s about building fundamentally stronger, more trustworthy, and resilient software that is secure by design.” ... “Open source is a no-brainer for developers, but as the ecosystem grows, so do the risks of malware, unsafe AI models, license issues, outdated packages, poor performance, and missing features,” says Mitchell Johnson, CPDO of Sonatype. “Modern devops teams need visibility into what’s getting pulled in, not just to stay secure and compliant, but to make sure they’re building with high-quality components.” ... “Version-controlling database schemas and configurations across development, QA, and production is a quietly powerful devops practice,” says McMillan. 


Cloud Identity Exposure Is 'a Critical Point of Failure'

Attackers keep targeting cloud-based identities to help them bypass endpoint and network defenses, says an August report from cybersecurity firm CrowdStrike. That report counts a 136% increase in cloud intrusions over the preceding 12 months, plus a 40% year-on-year increase in cloud intrusions tied to threat actors likely working for the Chinese government. "The cloud is a priority target for both criminals and nation-state threat actors," said Adam Meyers, head of counter adversary operations at CrowdStrike ... One challenge is that enough cloud identities justify elevated permissions, putting organizations at elevated risk when their credentials are exposed. Take security operations centers and incident response teams. In general, while "the principle of least privilege and minimal manual access" is a best practice, first responders often need immediate and "necessary access," says an August report from Darktrace. "Security teams need access to logs, snapshots and configuration data to understand how an attack unfolded, but giving blanket access opens the door to insider threats, misconfigurations and lateral movement." Rather than always allowing such access, experts recommend using tools that only provide it when needed, for example, through Amazon Web Services' Security Token Service. "Leveraging temporary credentials, such as AWS STS tokens, allows for just-in-time access during an investigation" that can be automatically revoked after, which "reduces the window of opportunity for potential attackers to exploit elevated permissions," Darktrace said.


How Software Development Teams Can Securely and Ethically Deploy AI Tools

Clearly, there is a danger that teams will trust AI too much, as these tools lack a command of the often nuanced context to recognize complex vulnerabilities. They may not fully grasp an application’s authentication or authorization framework, potentially leading to the omission of critical checks. If developers reach a state of complacency in their vigilance, the potential for such risks will only increase. ... Beyond security, team leaders and members must focus more on ethical and even legal considerations: Nearly one-half of software engineers are facing legal, compliance and ethical challenges in deploying AI, according the The AI Impact Report 2025 from LeadDev. The ethical/legal scenarios can take on a highly perplexing nature: A human engineer can read, learn from and write original code from an open-source library. But if an LLM does the same thing, it can be accused of engaging in derivative practices. What’s more, the current legal picture is a murky work in progress. Given the still-evolving judicial conclusions and guidelines, those using third-party AI tools need to ensure they are properly indemnified from potential copyright infringement liability, according to Ropes & Gray, a global law firm that advises clients on intellectual property and data matters. “Risk allocation in contracts concerning or contemplating AI models should be approached very carefully,” according to the firm.


How AI is Revolutionising RegTech and Compliance

Traditional approaches are failing, overwhelmed by increasing regulatory complexity and cross-border requirements. Enter RegTech: a technological revolution transforming how institutions manage regulatory obligations. Advanced artificial intelligence systems now predict compliance breaches weeks before they occur, while blockchain platforms create tamper-proof audit trails that streamline regulatory examinations. ... Natural language processing interprets complex regulatory documents automatically, updating compliance procedures within minutes of regulatory changes. Smart contracts execute compliance actions without human intervention, ensuring consistent adherence to evolving requirements. Leading institutions are achieving remarkable results. Barclays reduced regulatory document processing time from days to minutes using AI-powered analysis. JPMorgan's blockchain settlement system maintains compliance across multiple jurisdictions simultaneously. ... Regulatory-as-a-Service models are democratising access to sophisticated compliance capabilities. Smaller institutions can now access enterprise-grade RegTech through subscription services, reducing compliance costs by up to 50% whilst improving regulatory coverage. Challenges remain significant. Data privacy concerns intensify as compliance systems process vast quantities of sensitive information. Regulatory fragmentation across jurisdictions complicates platform development. 


CEOs Go All-In on AI, But Talent Isn't Ready

Despite the enthusiasm for AI, workforce readiness is still a critical concern. Approximately 74% of Indian CEOs see AI talent readiness as a determinant of their company's future success, yet 34% admit to a widening skills gap. This talent gap is multifaceted; it's not only technical proficiency that's in short supply, but also expertise in blending data science with ethics, regulatory understanding and business acumen. About 26% struggle to find candidates who balance technical skill with collaboration capabilities. ... Regulatory uncertainty still weighs heavily on CEOs' minds, with nearly half of Indian CEOs awaiting clearer regulatory guidance before pushing bold innovation initiatives, compared to only 39% globally. This cautious stance underlines a pragmatic approach to integrating AI amid evolving governance landscapes. About 76% of Indian CEOs worry that slow AI regulation progress could hinder organizational success. Ethical concerns also loom large: 62% of Indian CEOs cite them as significant barriers, slightly higher than the 59% global average, underscoring the importance of embedding trust and governance frameworks alongside technological investments. "This is why culture and leadership are very important. The board of directors must have a degree of AI literacy. There must be psychological safety in the organization. Employees must feel safe and if there's clear governance, it means there is a proactive suggestion to use sanctioned AI that meets security requirements," John Barker


Powering financial services innovation: The critical role of colocation

As AI continues to evolve, its impact on financial services is becoming both broader and deeper – moving beyond high-level innovation into the operational core of the enterprise. Today’s financial institutions face a dual mandate: to accelerate AI adoption in pursuit of competitive advantage, and to do so within the constraints of an increasingly complex digital and regulatory environment. From risk modelling and fraud prevention to real-time analytics and customer personalization, AI is being embedded into mission-critical functions. Realising its full potential, however, isn't solely a matter of algorithms – it hinges on having a data-first strategy, with the right infrastructure and governance in place. ... With exponential data growth presenting challenges, customers gain access to a secure, compliant, resilient, and performant foundation. This foundation enables the implementation of new technologies and seamless orchestration of data flows. Our goal is to simplify data management complexity and serve as the single, trusted, global data center partner for our customers. As organizations optimize their AI strategies, many are exploring cloud repatriation – the process of moving certain workloads from the cloud back to on-premises or colocation environments. This strategic move can be crucial for AI success, as it allows for better control over sensitive data, reduced latency, and improved performance for demanding AI workloads.


Measuring, Reporting, and Improving: Making Resilience Tangible and Accountable

A continuity plan sitting on a shelf provides little assurance of resilience. What matters is whether organizations can demonstrate their strategies work, they are tested, and corrective actions are tracked. Measurement transforms resilience from an abstract concept into quantifiable performance. ... Metrics ensure resilience is not left to chance or anecdote. They provide boards and regulators with evidence of progress, reinforcing accountability at the executive and governance levels. A resilience strategy that cannot be measured cannot be trusted. ... The first step in strengthening measurement is to define resilience key performance indicators (KPIs) and key risk indicators (KRIs). These metrics should evaluate outcomes rather than simply tracking activities, ensuring performance reflects actual readiness. ... Measurement alone is not enough without transparency. Organizations must establish reporting practices that make resilience performance visible to boards, regulators, and, when appropriate, customers. Sharing outcomes openly not only demonstrates accountability but also builds trust and credibility. ... One challenge organizations often encounter when measuring resilience is metric overload. In the effort to capture every detail, leaders may track too many indicators, creating complexity that dilutes focus and makes it difficult to interpret results. 


Bridging the Gap: Why DevOps Teams Are Quietly Becoming the Front Line of Security

For experienced DevOps practitioners, the idea of shifting security left isn't new. Static analysis in CI/CD pipelines, dependency scanning, and Infrastructure as Code (IaC) validation have become the norm. What's changed more recently is the pressure to respond to security events operationally, in addition to preventing them during builds. DevOps teams are adjusting in very real ways. Many are building security context into their logging practices, ensuring that logs are structured for debugging, and also for investigation and audit. Others are automating triage for security alerts using the same mindset they've applied to performance monitoring and deployment pipelines. Perhaps most importantly, DevOps teams are often the first to respond when something unusual shows up in system logs or access patterns. ... Security can be a shared responsibility across teams as long as boundaries and expectations are set. DevOps teams are defining their role in security more clearly by, for example, determining what gets logged, what counts as an anomaly, and who owns the investigation. They're also setting expectations around incident escalation, CVE response timeframes, and compliance requirements. When these lines are clear, security becomes an integrated part of the workflow instead of an extra burden. ... For many DevOps teams, security is part of the daily reality. It comes as a series of small, increasingly frequent interruptions.

Daily Tech Digest - July 30, 2024

Cyber security and Compliance: The Convergence of Regtech Solutions

While cybersecurity, in itself, is an area that requires significant resources to ensure compliance, a business organisation needs to deal with numerous other regulations. The business regulatory ecosystem is made up of over 1,500 acts and rules and more than 69,000 compliances. As such, each enterprise needs to figure out the regulatory requirements applicable to their business. The complexity of the compliance framework is such that businesses are often lagging behind their compliance timelines. Take, for instance, a single-entity MSME with a single-state operation involved in manufacturing automotive components. Even such an operation requires the employer to keep up with 624 unique compliances. These requirements can reach close to 1,000 for a pharmaceutical enterprise. Persisting with manual compliance methods while technology has taken over every other business operation has become the root cause of delays, lapses, and defaults. While businesses are investing in the best possible technological solutions for cybersecurity issues, they are disregarding the impact of technology on their compliance functions.


Millions of Websites Susceptible to XSS Attack via OAuth Implementation Flaw

Essentially, the ‘attack’ requires only a crafted link to Google (mimicking a HotJar social login attempt but requesting a ‘code token’ rather than simple ‘code’ response to prevent HotJar consuming the once-only code); and a social engineering method to persuade the victim to click the link and start the attack (with the code being delivered to the attacker). This is the basis of the attack: a false link (but it’s one that appears legitimate), persuading the victim to click the link, and receipt of an actionable log-in code. “Once the attacker has a victim’s code, they can start a new login flow in HotJar but replace their code with the victim code – leading to a full account takeover,” reports Salt Labs. The vulnerability is not in OAuth, but in the way in which OAuth is implemented by many websites. Fully secure implementation requires extra effort that most websites simply don’t realize and enact, or simply don’t have the in-house skills to do so. From its own investigations, Salt Labs believes that there are likely millions of vulnerable websites around the world. The scale is too great for the firm to investigate and notify everyone individually. 


How to Build a High-Performance Analytics Team

The first approach, which he called the “artisan model,” involves building a small team of highly experienced (and highly paid) data scientists. Such skilled and capable team members can generally tackle all aspects of solving a business problem, from subject matter expert engagement to hypothesis testing, production, and iteration. The “factory approach,” on the other hand, resembles more of an assembly line, with a large group of people divvying up tasks based on their areas of expertise: some working on the business problem definition, others handling data acquisition, and so on. This second approach requires hiring more people than the first approach, but the pay differential between the two types of team members is significant enough that the two approaches cost roughly the same. ... An analytics team needs to grow and evolve to survive, and management must treat its staff accordingly. “Data scientists are some of the most sought-after talent in the economy right now,” Thompson stressed, “So I’m working every day to make sure that my team is happy and that they’re getting work they’re interested in ­– that they’re being paid well and treated well.”


Securing remote access to mission-critical OT assets

The two biggest challenges around securing remote access to mission-critical OT assets are different depending on whether it’s a user or machine that needs to connect to the OT asset. In terms of user access, the fundamental challenge is that the cyber security team doesn’t know what the assets are, and who the users are. That’s where the knowledge of the OT engineers – coupled with an inventory of the assets comes into play. The security team can leverage the inventory, experience, and knowledge of the OT engineers to operate as the “first line of defense” to stand up the organizational defenses. With respect to machine-to-machines access organizations typically don’t have an understanding of what “known good” traffic should look like between these assets. Without this understanding knowledge, it’s impossible to spot the anomalies from the baseline. That’s where a good cyber-physical system protection platform comes into play, providing the ability to understand the typical communication patterns that can eventually be operationalized in network segmentation rules to ensure effective security.


CrowdStrike debacle underscores importance of having a plan

To CrowdStrike’s credit, as well as its many partners and the CISO/InfoSec community at large, a lot of oil was burned in the initial days after the faulty update was transmitted as the community collectively jumped in and lent a hand to mitigate the situation. ... “Moving forward, this outage demonstrates that continuous preparation to fortify defenses is vital, especially before outages occur,” Christine Gadsby, CISO at Blackberry, opined. She continued, “Already understanding what areas are most vulnerable within a system prevents a panicked reaction when something looks amiss and makes it more difficult for hackers to wreak havoc. In a crisis, defense is the best offense; the value of confidence that comes with preparation cannot be underestimated.” ... CISOs should also review what needs to be changed, included, or deleted from their emergency response and business continuity playbooks. ... Now is the time for each CISO to do a bit of introspection on their team’s ability to address a similar scenario, and plan, exercise, and be prepared for the unexpected. Which could happen today, tomorrow, or hopefully never.


How Searchable Encryption Changes the Data Security Game

Organizations know they must encrypt their most valuable, sensitive data to prevent data theft and breaches. They also understand that organizational data exists to be used. To be searched, viewed, and modified to keep businesses running. Unfortunately, our Network and Data Security Engineers were taught for decades that you just can't search or edit data while in an encrypted state. ... So why, now, is Searchable Encryption suddenly becoming a gold standard in critical private, sensitive, and controlled data security? According to Gartner, "The need to protect data confidentiality and maintain data utility is a top concern for data analytics and privacy teams working with large amounts of data. The ability to encrypt data, and still process it securely is considered the holy grail of data protection." Previously, the possibility of data-in-use encryption revolved around the promise of Homomorphic Encryption (HE), which has notoriously slow performance, is really expensive, and requires an obscene amount of processing power. However, with the use of Searchable Symmetric Encryption technology, we can process "data in use" while it remains encrypted and maintain near real-time, millisecond query performance.


How Cloud-Based Solutions Help Farmers Improve Every Season

At the start of each growing season, farmers can use previous years’ data to strategically plan where and when to plant seeds, identifying the areas of the field where plants often grow strongly or are typically not as prosperous. From there, planters equipped with robotics, sensors, and camera vision, augmented with field boundaries, guidance lines, and other data provided from the cloud, can precisely place hundreds of seeds per second at an optimal depth and with optimal spacing, avoiding losses from seeds being planted too shallow, deep, or close to another plant. ... Advanced machines gather a wide range of data to support the next step of nurturing plant growth. That data is critical, because while plants are growing, so are weeds. And weeds need to be treated in a timely manner to give crops the best possible conditions to grow. With access to the prior year’s data, farmers can anticipate where weeds are likely to grow and target them directly. Today’s sprayers use computer vision and machine learning to detect where weeds are located as the sprayer moves throughout a field, applying herbicide only where it is needed. This not only reduces costs but is also more sustainable.


Thinking Like an Architect

The world we're in is not simple. The applications we build today are complex because they are based on distributed systems, event-driven architectures, asynchronous processing, or scale-out and auto-scaling capabilities. While these are impressive capabilities, they add complexity. Models are an architect’s best tool to tackle complexity. Models are powerful because they shape how people think. Dave Farley illustrated this with an example: long ago, people believed the Earth was at the center of the universe and this belief made the planets' movements seem erratic and complicated. The real problem wasn't the planets' movements but using an incorrect model. When you place the sun at the center of the solar system, everything makes sense. Architects explaining things to others who operate differently may believe that others don't understand when they simply use a different mental model. ... Architects can make everyone else a bit smarter by seeing multiple dimensions. By expanding the problem and solution space, architects enable others to approach problems more intelligently. Often, disagreements arise when two parties view a problem from different angles, akin to debating between a square and a triangle without progress.


CrowdStrike Outage Could Cost Cyber Insurers $1.5 Billion

Most claims will center on losses due to "business interruption, which is a primary contributor to losses from cyber incidents," it said. "Because these losses were not caused by a cyberattack, claims will be made under 'systems failure' coverage, which is becoming standard coverage within cyber insurance policies." But, not all systems-failure coverage will apply to this incident, it said, since some policies exclude nonmalicious events or have to reach a certain threshold of losses before being triggered. The outage resembled a supply chain attack, since it took out multiple users of the same technology all at once - including airlines, doctors' practices, hospitals, banks, stock exchanges and more. Cyber insurance experts said the timing of the outage will also help mitigate the quantity of claims insurers are likely to see. At the moment CrowdStrike sent its update gone wrong, "more Asia-Pacific systems were online than European and U.S. systems, but Europe and the U.S. have a greater share of cyber insurance coverage than does the Asia-Pacific region," Moody's Reports said. The outage, dubbed "CrowdOut" by CyberCube, led to 8.5 million Windows hosts crashing to a Windows "blue screen of death" and then getting stuck in a constant loop of rebooting and crashing.


Open-source AI narrows gap with proprietary leaders, new benchmark reveals

As the AI arms race intensifies, with new models being released almost weekly, Galileo’s index offers a snapshot of an industry in flux. The company plans to update the benchmark quarterly, providing ongoing insight into the shifting balance between open-source and proprietary AI technologies. Looking ahead, Chatterji anticipates further developments in the field. “We’re starting to see large models that are like operating systems for this very powerful reasoning,” he said. “And it’s going to become more and more generalizable over the course of the next maybe one to two years, as well as see the context lengths that they can support, especially on the open source side, will start increasing a lot more. Cost is going to go down quite a lot, just the laws of physics are going to kick in.” He also predicts a rise in multimodal models and agent-based systems, which will require new evaluation frameworks and likely spur another round of innovation in the AI industry. As businesses grapple with the rapid pace of AI advancement, tools like Galileo’s Hallucination Index will likely play an increasingly crucial role in informing decision-making and strategy. 



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - March 24, 2024

How AI is changing scientific discovery

While the spread of misinformation is one of the many ways that AI is changing science, an even broader and more positive application of this technology is self-driving labs (SDL). In an SDL, AI selects new material formulations aided by robotic arms to synthesise new materials. While this technology is currently limited to discovering new materials, it relieves researchers of having to grapple with trillions of possible formulations. This greatly improves the labour productivity in science, saving time and money and allowing researchers more time to improve creative aspects such as experimental design. In fact, in April 2023, UofT was awarded Canada’s largest-ever research grant of $200 million in research funding towards Acceleration Consortium—a UofT-based network that aims to accelerate materials discovery through AI and robotics. Through this funding, autonomous labs are being built at UofT, such as in the Leslie Dan Faculty of Pharmacy where AI, automation, and advanced computing are used to iteratively test and develop material combinations for new drug formulations.


Need for upskilling and cross-skilling amongst cybersecurity professionals

The urgency of upskilling and cross-skilling is further underscored by the shortage of skilled cybersecurity professionals. In the post-pandemic years, the digital transformation across industries has resulted in a massive demand for cybersecurity professionals with the right skills. As per estimates, there are over 5.4 million cybersecurity professionals currently globally, and there are nearly 4 million job openings in the field. In fact, 67% of cybersecurity professionals have reported that their organizations face a shortage of adequately skilled personnel to secure their digital infrastructure. Even in India, where digital infrastructure has grown by leaps and bounds in the last two years, there is a major need to train more people on cutting-edge cybersecurity practices. As organizations struggle to fill critical cybersecurity roles, existing professionals must take the initiative to expand their skill sets. Training programs, certifications, and continuous learning opportunities can empower cybersecurity experts to bridge the gap between their current knowledge and the ever-evolving threat landscape.


Data Sovereignty and Digital Governance in APAC: What’s Next?

There is increasing diversity of data sources, datasets, workloads and the permeance of cloud and multi-cloud, and edge computing throughout the data management cycle. This coupled with more interconnected and decentralised organisations with embedded solutions and loosely coupled architectures has necessitated breaking down data silos while maintaining data and metadata quality. This hence can result in breaking down of geopolitical borders, due to the fact that data can get generated, processed, localised, stored, transferred, transformed and accessed across different countries. This necessitates knowledge and compliance to privacy and security laws of all applicable countries, especially in the cloud, edge, co-location infrastructure and on-premise ecosystems. Moreover, there are potentially complex situations and conflicts especially in case of variances in Data Protection Acts across countries such as data flows between EU-GDPR and the US Privacy laws. There could be additional different scenarios at federal or state levels. Similar considerations must be planned and executed in case of cross-border data flows, backup and disaster recovery.


Boards of directors: The final cybersecurity defense for industrials

First and foremost, board members provide oversight and guidance. They should ensure that executives and their teams set a high standard for cybersecurity. They should then follow through on achieving them by ensuring that security is embedded by design in digital products and that technology teams share responsibility for cybersecurity. The board is the last line of defense in ensuring such initiatives get planned and funded. Boards also look at risk prioritization and trade-offs. They are often intimidated when it comes to determining risk levels and giving fact-based inputs into risk trade-offs. In addition, the vocabulary and reporting capabilities used by security teams with their boards are often inconsistent and technical. As a result, it can be overwhelming for board members who want to contribute meaningfully to reducing cybersecurity risk but are not quite sure how. A board member does not need to have specific knowledge about cybersecurity to add value. Instead, they need to test and ask the cyber team about potential business impacts. This means the cyber team should equate cyber issues and controls with business risks.


Compliance meets AI: A banking love story

Most financial institutions are at preliminary stages in evaluating opportunities to use generative AI in their operations. Some of the areas where we are seeing the anticipated use of LLMs are in customer services. Large language models can interact with a bank’s customers in very natural conversations. Depending on the data that the bank trains the LLM on, the chat bots can answer questions about customer accounts and even provide recommended product offerings and investment advice. Several large banks are working with internal LLM models to capture call center notes, organize information for investment advisors and organize other product data for customer service reps, with plans to roll out to more customer-facing uses as extensive testing addresses potential risks. Banks are also assessing opportunities to improve internal operations. Generative AI capabilities enable new ways to analyze data. One practical use case for most organizations is to train LLMs on all the pockets of organizational information that employees need to access to do their jobs. 


Navigating fraud and AML challenges with innovation solutions in a new financial frontier

In the intricate domain of Anti-Money Laundering (AML) compliance, the quality of data plays a pivotal role. Wolters Kluwer’s CCH iFirm AML module underscores this by ensuring access to leading credit bureaus and governmental data sets. The accuracy, completeness, timeliness, consistency, and relevance of data are fundamental to the effective detection, prevention, and reporting of potential money laundering activities. High-quality data not only aids in identifying suspicious transactions more accurately but also enhances the efficiency of the compliance process. For CFOs, this means a significant reduction in the risk of non-compliance penalties and the fostering of trust with regulatory bodies. ... The advent of AI-boosted cyber threats poses a significant challenge for CFOs in 2024. Darktrace’s study reveals a stark reality: while 89% of IT security specialists anticipate these threats will significantly impact their organisations within the next two years, 60% admit to being ill-prepared to defend against them. The escalation in sophisticated phishing attacks, leveraging advanced language and punctuation, underscores the evolving nature of cyber threats. 


Prompt Injection Vulnerability in Google Gemini Allows for Direct Content Manipulation

The researchers say that the prompt injection attacks impact Gemini Advanced accessed by users with Google Workspace, and organizations that are making use of the Gemini API. The content manipulation risk is also said to more generally apply to world governments as it could be used to output inaccurate or falsified information about elections. The risk is particularly acute as Google Gemini has been trained on audio, video, images and code in addition to text. One of the central issues identified by the researchers is that it is relatively trivial to get Google Gemini to leak system prompt information. This is information about the “prime directives” of the AI model, so to speak, that should not be visible to service users. The researchers’ first prompt injection attack is to simply change the wording when asking the AI about this information, causing it to spit out its core rules when asked about its “foundational instructions” instead. Another exploit involving the system prompt is a seeming state of confusion that the AI can be thrown into by peppering it with many uncommon tokens.


RegTech solutions can be a game changer in fintech regulatory scrutiny

RegTech solutions allow businesses to create transparency and accountability within compliance procedures and ensure the timely conclusion of statutory obligations. Employers can stay on top of important changes and address them promptly with the help of compliance management software. Digital, authentic, and tamper-proof copies of all required compliance papers are stored conveniently. While onboarding any RegTech solution, the legal teams of the RegTech players conduct comprehensive compliance applicability assessments to identify the list of applicable acts and compliances. This helps in creating a list relevant to each financial institution.  RBI directives work to continuously evolve the regulatory landscape to keep up with innovations in technology and services. ... The RegTech space has been investing heavily in creating automation layers for compliance document generation and integration with the transaction systems to eliminate any manual touch points. Additionally, they are preparing themselves for API based filings as soon as the regulators are ready to adopt a GST like model and create an eco-system for RegTech players.  


Managing Technical Debt in Agile Environments

Usually, Technical debt occurs when teams rush to push new features within deadlines, by writing code without thinking about other considerations such as security, extensibility, etc. Over time, the tech debt increases and becomes difficult to manage. ... Code Debt: When we talk about tech debt, code debt is the first thing that comes to mind. It is due to bad coding practices, not following proper coding standards, insufficient code documentation, etc. This type of debt causes problems in terms of maintainability, extensibility, security, etc. Testing Debt: This occurs when the entire testing strategy is inadequate, which includes the absence of unit tests, integration tests, and adequate test coverage. This kind of debt causes us to lose confidence in pushing new code changes and increases the risk of defects and bugs surfacing in production, potentially leading to system failures and customer dissatisfaction. Documentation Debt: This manifests when documentation is either insufficient or outdated. It poses challenges for both new and existing team members in comprehending the system and the rationale behind certain decisions, thereby impeding efficiency in maintenance and development efforts.


Science Simplified: What Is Quantum Mechanics?

In a more general sense, the word ​“quantum” can refer to the smallest possible amount of something. The field of quantum mechanics deals with the most fundamental bits of matter, energy and light and the ways they interact with each other to make up the world. Unlike the way in which we usually think about the world, where we imagine things to have particle- or wave-like properties separately (baseballs and ocean waves, for example), such notions don’t work in quantum mechanics. Depending on the situation, scientists may observe the same quantum object as being particle-like or wave-like. For example, light cannot be thought of as only a photon (a light particle) or only a light wave, because we might observe both sorts of behaviors in different experiments. Day to day, we see things in one ​“state” at a time: here or there, moving or still, right-side up or upside down. The state of an object in quantum mechanics isn’t always so straightforward. For example, before we look to determine the locations of a set of quantum objects, they can exist in what’s called a superposition — or a special type of combination — of one or more locations. 



Quote for the day:

"Success comes from knowing that you did your best to become the best that you are capable of becoming." -- John Wooden

Daily Tech Digest - December 19, 2022

7 ways CIOs can build a high-performance team

“People want to grow and change, and good business leaders are willing to give them the opportunity to do so,” adds Cohn. Here, you can get HR involved, encouraging them to bring their expertise and ideas to the table to help you come up with the right approach to training and employee development. In addition, it’s important to remember that an empathetic leader understands that people come from different places and therefore won’t grow and develop in the same manner. Modern CIOs must approach upskilling and training with this reality in mind, advises Benjamin Marais, CIO at financial services company Liberty Group SA. You also need to create opportunities that expose your employees to what’s happening outside the business, suggests van den Berg. This is especially true where it pertains to future technologies and skills because if teams know what’s out there, they better understand what they need to do to keep up. Given the rise in competition for skills in the market, you have to demonstrate your best when trying to attract top talent and retain them, stresses Cohn. 


10 Trends in DevOps, Automated Testing and More for 2023

Developers and QA professionals are some of the most sought-after skilled laborers who are acutely aware of the value they provide to organizations. As we head into next year, this group will continue to leverage the demand for their skills in pursuit of their ideal work environment. Companies that do not consider their developer experience and force pre-pandemic systems onto a hybrid-first world set themselves up for failure, especially when tools for remote and virtual testing and quality assurance are readily available. Developer teams also need to be equally equipped for success through the tools and opportunities that can help ensure an innate sense of value to the organization – and if they don’t have the tools they need, these developers will find them elsewhere. ... We’re starting to see consolidation in both the market and in the user personas we’re all chasing. Testing companies are offering monitoring, and monitoring companies are offering testing. This is a natural outcome of the industry’s desire to move toward true observability: deep understanding of real-world user behavior, synthetic user testing, passively watching for signals and doing real-time root cause analysis—all in service of perfecting the customer experience.


The beautiful intersection of simulation and AI

Simulation models can synthesize real-world data that is difficult or expensive to collect into good, clean and cataloged data. While most AI models run using fixed parameter values, they are constantly exposed to new data that may not be captured in the training set. If unnoticed, these models will generate inaccurate insights or fail outright, causing engineers to spend hours trying to determine why the model is not working. ... Businesses have always struggled with time-to-market. Organizations that push a buggy or defective solution to customers risk irreparable harm to their brand, particularly startups. The opposite is true as “also-rans” in an established market have difficulty gaining traction. Simulations were an important design innovation when they were first introduced, but their steady improvement and ability to create realistic scenarios can slow perfectionist engineers. Too often, organizations try to build “perfect” simulation models that take a significant amount of time to build, which introduces the risk that the market will have moved on.


What is VPN split tunneling and should I be using it?

The ability to choose which apps and services use your VPN of choice and which don't is incredibly powerful. Activities like remote work, browsing your bank's website, or online shopping via public Wi-Fi can definitely benefit from the added security of a VPN, but other pursuits, like playing online games or streaming readily available content, can be hurt by the slight delay VPNs may add to your traffic. The modest decrease to your connection speed is barely noticeable for browsing, but can be disastrous for online games. Being able to simultaneously connect to sensitive sites and services through your secure VPN, and to non-sensitive games and apps means you won't constantly need to enable and disable your VPN connection when switching tasks. This is important as forgetting to enable it at the wrong time could leave you exposed to security risks. ... Split tunneling divides your network traffic in two. Your standard, unencrypted traffic continues to flow unimpeded down one path, while your sensitive and secured data gets encrypted and routed through the VPN's private network. It's like having a second network connection that's completely separate, a tiny bit slower, but also far more secure.


Why don’t cloud providers integrate?

Although it’s not an apples-to-apples comparison, Google’s Athos enables enterprises to run applications across clouds and other operating environments, including ones Google doesn’t control. As with Amazon DataZone, it’s very possible to manage third-party data sources. One senior IT executive from a large travel and hospitality company told me on condition of anonymity, “I’m sure [cloud vendors] can integrate with third-party services, but I suspect that’s not a choice they’re willing to make. For instance, they could publish some interfaces for third parties to integrate with their control plane as well as other means in the data plane.” Integration is possible, in other words, but vendors don’t always seem to want it. This desire to control sometimes leads vendors down roads that aren’t optimal for customers. As this IT executive said, “The ecosystem is being broken. Instead of interoperating with third-party services, [cloud vendors often] choose to create API-compatible competing services.” He continued, “There is a zero-sum game mindset here.” Namely, if a customer runs a third-party database and not the vendor’s preferred first-party database, the vendor has lost.


How RegTech helps financial services providers overcome regulation challenges

Two main types of RegTech capabilities are helping financial service institutions stay compliant: software that encompasses the whole system — for example a full client onboarding cycle — and software that manages a particular process, such as reporting or document management. Hugo Larguinho Brás explains: “The technologies that handle the whole process from A to Z are typically heavier to deploy, but they will allow you to cover most of your needs. These are also more expensive and often more difficult to adapt in line with a company’s specificities.” “Meanwhile, those technologies that treat part of the process can be combined with other tools. While this brings more agility, the need to find and combine several tools can also turn your target model more complex to run.” “We see more and more cloud and on-premises solutions available to asset management and securities companies, from software-as-a-service (SaaS) and platform-as-a-service (PaaS) deployed in-house, to solutions combined to outsourced capabilities ...”


What You Need to Know About Hyperscalers

Current hyperscaler adopters are primarily large enterprises. “The speed, efficiencies, and global reach hyperscalers can provide will surpass what most enterprise organizations can build within their own data centers,” Drobisewski says. He predicts that the partnerships being built today between hyperscalers and large enterprises are strategic and will continue to grow in value. “As hyperscalers maintain their focus on lifecycle, performance, and resiliency, businesses can consume hyperscaler services to thrive and accelerate the creation of new digital experiences for their customers,” Drobisewski says. ... Many adopters begin their hyperscaler migration by selecting the software applications that are best suited to run within a cloud environment, Hoecker says. Over time, these organizations will continue to migrate workloads to the cloud as their business goals evolve, he adds. Many hyperscaler adopters, as they become increasingly comfortable with the approach, are beginning to establish multi-cloud estates. “The decision criteria is typically based on performance, cost, security, access to skills, and regulatory and compliance factors,” Hoecker notes.


UID smuggling: A new technique for tracking users online

Researchers at UC San Diego have for the first time sought to quantify the frequency of UID smuggling in the wild, by developing a measurement tool called CrumbCruncher. CrumbCruncher navigates the Web like an ordinary user, but along the way, it keeps track of how many times it has been tracked using UID smuggling. The researchers found that UID smuggling was present in about 8 percent of the navigations that CrumbCruncher made. The team is also releasing both their complete dataset and their measurement pipeline for use by browser developers. The team’s main goal is to raise awareness of the issue with browser developers, said first author Audrey Randall, a computer science Ph.D. student at UC San Diego. “UID smuggling is more widely used than we anticipated,” she said. “But we don’t know how much of it is a threat to user privacy.” ... UID smuggling can have legitimate uses, the researchers say. For example, embedding user IDs in URLs can allow a website to realize a user is already logged in, which means they can skip the login page and navigate directly to content.


Bring Sanity to Managing Database Proliferation

How can you avoid being a victim of the bow wave of database proliferation? Recognize that you can allocate your resources in a way that benefits both your bottom line and your stress level by consolidating how you run and manage modern databases. Investing heavily in self-managing the legacy databases used in high volume by many of your people makes a lot of sense. Database workloads that are typically used for mission-critical transaction processing, such as IBM DB2 in financial services, are subject to performance tuning, regular patching and upgrading by specialized database administrators in a kind of siloed sanctum sanctorum. Many organizations will hire an in-house Oracle or SAP Hana expert and create a team, ... But what about the 40 other highly functional, highly desirable cloud databases in your enterprise that aren’t used as often? Do you need another 20 people to manage them? Open source databases like MySQL, MongoDB, Cassandra, PostgreSQL and many others have gained wide adoption, and many of their use cases are considered mission-critical. 


An Ode to Unit Tests: In Defense of the Testing Pyramid

What does the unit in unit tests mean? It means a unit of behavior. There's nothing in that definition dictating that a test has to focus on a single file, object, or function. Why is it difficult to write unit tests focused on behavior? A common problem with many types of testing comes from a tight connection between software structure and tests. That happens when the developer loses sight of the test goal and approaches it in a clear-box (sometimes referred to as white-box) way. Clear-box testing means testing with the internal design in mind to guarantee the system works correctly. This is really common in unit tests. The problem with clear-box testing is that tests tend to become too granular, and you end up with a huge number of tests that are hard to maintain due to their tight coupling to the underlying structure. Part of the unhappiness around unit tests stems from this fact. Integration tests, being more removed from the underlying design, tend to be impacted less by refactoring than unit tests. I like to look at things differently. Is this a benefit of integration tests or a problem caused by the clear-box testing approach? What if we had approached unit tests in an opaque-box approach?



Quote for the day:

"Strategy is not really a solo sport even if you_re the CEO." -- Max McKeown

Daily Tech Digest - December 09, 2022

Why CIOs must think of themselves as products—and hostage negotiators

Despite the growing breadth of the CIO’s role, Tyler believes that technology executives can go further still, extending their value and influence within the organization by thinking of themselves less of a service provider and more of a ‘powerful, valuable product’, which senior executives, partners and peers need to do their jobs effectively. “Product value proposition in business terms is something we develop when we’re trying to create the next generation of goods and services for our customers or citizens—any stakeholder we’re working with,” he said ... “Your leadership is a product that all of your executive team, partners, peers, and all of the people in your organization needs,” he said. Tyler also suggested that CIOs must build their own product value proposition to deliver the maximum value to the business, and make a promise to stakeholders of how technology will help them achieve their desired outcomes, adding that technology leaders can take simple steps to start by understanding who consumes IT, and by deeply understanding their jobs, and how IT can remove pains and create gains


Digital transformation trends in 2023

Automation adoption is often viewed as one of the best methods that companies can employ to streamline processes and boost revenues without causing costs to spiral out of control. Recognising the benefits that can be brought by automation, 54 per cent of organisations have already begun implementing robotic process automation [RPA] into their processes, according to a Deloitte survey. With the economy in such dire straits, and the landscape continuing to look so grim for businesses, it is highly likely that we will see a greater number of companies investing in this technology than ever before. Not only will implementing automation be an affordable alternative to investing in a full digital transformation project for many businesses, it will also provide a basis upon which to create new efficiencies in the years to come. Indeed, Gartner predicts that, by 2024, hyper automation will enable organisations to lower their operational costs by 30 per cent. At a time of great economic uncertainty, when every penny that businesses spend needs to be clearly justified, automation is a proven and dependable cost saver.


5 Ways to Embrace Next-Generation AI

While AI can solve big problems, it doesn’t have to do so all at once. “In the projects I’ve seen successful from our customers or internally, it is just getting moving with the tools you have or small investments and then growing from there,” Mark Maughan, chief analytics officer at cloud business intelligence platform Domo, asserted. “Just get moving. Get started. Test, learn, grow, and iterate.” ... A significant part of gaining that buy-in is having the talent that can communicate effectively with the various stakeholders. Leveraging storytelling to illustrate the problem and how AI can solve it is a powerful tool. The earlier organizations engage relevant stakeholders, the more likely the project is to be successful. That ability to communicate may or may not come naturally, but it can be learned. “One thing that we've always found very helpful is either rotations or shadowing for anyone in the data science analytic organization around the different business and operational stakeholder groups to get a better understanding of what is actually going on,” Finnerty said.


The cybersecurity challenges and opportunities of digital twins

Unfortunately, while CISOs should be key stakeholders in digital twin projects, they are almost never the ultimate decision maker, says Alfonso Velosa, research vice president of IoT at Gartner. “Since digital twins are tools to drive business process transformation, the business or operational unit will often lead the initiative. Most digital twins are custom-built to address a specific business requirement,” he says. When an enterprise buys a new smart asset, whether a truck, backhoe, elevator, compressor, or freezer, it will often come with a digital twin, according to Velosa. “Most of the operational teams will need a streamlined and cross-IT—not just CISO—set of support to integrate them into their broader business processes and to manage security.” If proper cybersecurity controls aren’t put in place, digital twins can expand a company’s attack surface, give threat actors access to previously inaccessible control systems, and expose pre-existing vulnerabilities. When the digital twin of a system is created, the potential attack surface effectively doubles—adversaries can go after the systems themselves or attack the digital twin of that system.


RegTech Can Help Solve ESG Data Management and Trust Challenges

The benefits that RegTech offers sustainability data officers go beyond mere automation of processes; they also include a guarantee that the data will be seamlessly integrated into a financial institution’s broader data assets and can ensure that it is traceable and auditable. Katie Carrasco agreed. The head of ESG at Global Innovation Fund, an impact investment vehicle said that RegTech could provide a firm with a 360-degree view of its ESG data, a matter that’s critical to good governance, which in turn is important in ensuring the accurate identification of opportunities and risks. Data traceability and auditability will also provide for credibility, argued Mary Anne Bullock, global strategic account director for Solidatus. At a time when greenwashing is making headlines and undermining the ESG project, Bullock said that being able to trace data from source to use-case would help demonstrate its veracity, offer transparency into firms’ activities and build trust. Building trust comes down to credible metrics, said Seethepalli, and that would come when auditability is added into the data management mix. 


As Complexity Challenges Security, Is Time the Solution?

"Complexity leaves me in a very depressing place; complexity is just forever increasing," said Moss, who's the founder of Black Hat and regularly opens the conference by detailing leading challenges as well as potential solutions. For addressing complexity, he said, "time has got me pretty excited." Simply put, being strategic about doing things faster - including detection and recovery - gives organizations one tactic to blunt the impact of increased complexity. Not all complexity involves technological evolution, such as malware built to better evade defenses, or criminals wielding zero-day exploits. Researchers debuted last week ChatGPT, a prototype, conversational AI chatbot that can sometimes appear to be human. This means added complexity for security professionals, since many security tools use attackers' poor command of English to detect and block phishing attacks. Expect criminals to soon use tools such as ChatGPT to write lures that seem to have been crafted by a native speaker, said Daniel Cuthbert, a veteran cybersecurity researcher who's a member of the U.K. government's new cyber advisory board.


Meta’s behavioral ads will finally face GDPR privacy reckoning in January

If Meta is forced to ask users if they want “personalized” ads (its favored euphemism for surveillance ads), that is definitely big news — given that rates of denials when web users are actually given a choice over targeted ads are typically very high. The crux of noyb’s original complaints against Meta services was that users were not offered a choice to deny its processing for advertising — despite the GDPR stipulating that if consent is the legal basis being claimed for processing personal data, it must be specific, informed and freely given. However — plot twist! — it later emerged that as the GDPR came into application, Meta had quietly switched from claiming consent as its legal basis for this behavioral advertising processing to saying it is necessary for the performance of a contract — and claiming users of Facebook and Instagram are in a contract with Meta to receive targeting ads. This argument implies that Meta’s core service is not social networking; it’s behavioral advertising. Max Schrems, noyb’s honorary chairman and long-time privacy law thorn in Facebook’s side, has called this an exceptionally shameless attempt to bypass the GDPR.


Introduction to Interface-Driven Development (IDD)

This concept already existed in some areas, such as Protocol-oriented programming in Swift or Interface-based programming in Java, and it is based on Design by Contract by Bertrand Meyer, described in his book “Object-Oriented Software Construction”. In the book, he discusses standards for contracts between a method and a caller. Also, Hunt and Thomas rely upon a similar concept in their “The Pragmatic Programmer” book, in the section on Prototyping Architecture: “Most prototypes are constructed to model the entire system under consideration. As opposed to tracer bullets, none of the modules in the prototype system need to be particularly functional. What you are looking for is how the system hangs together as a whole, again deferring details.“ The problem that this process needs to solve is components that are vaguely defined during design, and we tend to give more responsibility to some components than is necessary. A usual implication of such design is bad and untestable code.


Leveraging the full potential of zero trust

In line with the motivations behind cloud migration, Zscaler found that a focus on wider strategic outcomes is missing from how organizations are planning emerging technology initiatives. Regarding the single most challenging aspect of implementing emerging technology projects, 30% cited adequate security, followed by budget requirements for further digitization (23%). However, only 19% cited dependency on strategic business decisions as a challenge. While budget concerns are natural, the focus on securing the network while ignoring strategic business alignment suggests organizations are focused on security without a full understanding of its business benefit, and that zero trust itself is not yet understood as a business enabler. “The state of zero trust transformation within organizations today is promising – implementation rates are strong,” said Nathan Howe, VP of Emerging Tech, 5G at Zscaler. “But organizations could be more ambitious. There’s an incredible opportunity for IT leaders to educate business decision-makers on zero trust as a high-value business driver, especially as they grapple with providing a new class of hybrid workplace or production environment and reliant on a range of emerging technologies, such as IoT and OT, 5G and even the metaverse.


Going from Architect to Architecting: the Evolution of a Key Role

A fundamental principle of today’s software architecture is that it's an evolutionary journey, with varying routes and many influences. That evolution means we change our thinking based on what we learn, but the architect has a key role in enabling that conversation to happen. ... The architect playing a sole role in the software development game is no longer the case. Architecting a system is now a team sport. That team is the cross functional capability that now delivers a product and is made up of anyone who adds value to the overall process of delivering software, which still includes the architect. Part of the reason behind this, as discussed earlier, is that the software development ecosystem is a polyglot of technologies, languages (not only development languages, also business and technical), experiences (development and user) and stakeholders. No one person can touch all bases. This change has surfaced the need for a mindset shift for the architect; working as part of a team has great benefits but in turn has its challenges.



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor

Daily Tech Digest - April 30, 2021

Tech to the aid of justice delivery

Obsolete statutes which trigger unnecessary litigation need to be eliminated as they are being done currently with over 1,500 statutes being removed in the last few years. Furthermore, for any new legislation, a sunset review clause should be made a mandatory intervention, such that after every few years, it is reviewed for its relevance in the society. A corollary to this is scaling decriminalisation of minor offences after determining as shown by Kadish SH in his seminal paper ‘The Crisis of Overcriminalization’, whether the total public and private costs of criminalisation outweigh the benefits? Non-compliance with certain legal provisions which don’t involve mala fide intent can be addressed through monetary compensation rather than prison time, which inevitably instigates litigation. Finally, among the plethora of ongoing litigations in the Indian court system, a substantial number are those that don’t require interpretation of the law by a judge, but simply adjudication on facts. These can take the route of ODR, which has the potential for dispute avoidance by promoting legal education and inducing informed choices for initiating litigation and also containment by making use of mediation, conciliation or arbitration, and resolving disputes outside the court system.


Leading future-ready organizationsTo break through these barriers to Agile, companies need a restart. 

They need to continue to expand on the initial progress they’ve made but focus on implementing a wider, more holistic approach to Agile. Every aspect of the organization must be engaged in an ongoing cyclical process of “discover and evaluate, prioritize, build and operate, analyze…and repeat.” ... Organizations that leverage digital decoupling are able to get on independent release cycles and unlock new ways of working with legacy systems. Based on our work with clients, we’ve seen that this can result in up to 30% reduction in cost of change, reduced coordination overhead, and increased speed of planning and pace of delivery. ... In our work with clients, we see firsthand how cross-functional teams and automation of application delivery and operations contributes to increased pace of delivery, improved employee productivity, and up to 30% reduction in deployment time. Additionally, scaling DevOps enables fast and reliable releases of new features to production within short iterations and includes optimizing processes and upskilling people, which is the starting point for a collaborative and liquid enterprise. .... Moving talent and partners into a non-hierarchal and blended talent sourcing and management model can result in 10-20% increase in capacity.


F5 Big-IP Vulnerable to Security-Bypass Bug

The vulnerability specifically exists in one of the core software components of the appliance: The Access Policy Manager (APM). It manages and enforces access policies, i.e., making sure all users are authenticated and authorized to use a given application. Silverfort researchers noted that APM is sometimes used to protect access to the Big-IP admin console too. APM implements Kerberos as an authentication protocol for authentication required by an APM policy, they explained. “When a user accesses an application through Big-IP, they may be presented with a captive portal and required to enter a username and password,” researchers said, in a blog posting issued on Thursday. “The username and password are verified against Active Directory with the Kerberos protocol to ensure the user is who they claim they are.” During this process, the user essentially authenticates to the server, which in turn authenticates to the client. To work properly, KDC must also authenticate to the server. KDC is a network service that supplies session tickets and temporary session keys to users and computers within an Active Directory domain.


4 Business Benefits of an Event-Driven Architecture (EDA)

Using an event-driven architecture can significantly improve developmental efficiency in terms of both speed and cost. This is because all events are passed through a central event bus, which new services can easily connect with. Not only can services listen for specific events, triggering new code where appropriate, but they can also push events of their own to the event bus, indirectly connecting to existing services. ... If you want to increase the retention and lifetime value of customers, improving your application’s user experience is a must. An event-driven architecture can be incredibly beneficial to user experience (albeit indirectly) since it encourages you to think about and build around… events! ... Using an event-driven architecture can also reduce the running costs of your application. Since events are pushed to services as they happen, there’s no need for services to poll each other for state changes continuously. This leads to significantly fewer calls being made, which reduces bandwidth consumption and CPU usage, ultimately translating to lower operating costs. Additionally, those using a third-party API gateway or proxy will pay less if they are billed per-call.


Gartner says low-code, RPA, and AI driving growth in ‘hyperautomation’

Gartner said process-agnostic tools such as RPA, LCAP, and AI will drive the hyperautomation trend because organizations can use them across multiple use cases. Even though they constitute a small part of the overall market, their impact will be significant, with Gartner projecting 54% growth in these process-agnostic tools. Through 2024, the drive toward hyperautomation will lead organizations to adopt at least three out of the 20 process-agonistic types of software that enable hyperautomation, Gartner said. The demand for low-code tools is already high as skills-strapped IT organizations look for ways to move simple development projects over to business users. Last year, Gartner forecast that three-quarters of large enterprises would use at least four low-code development tools by 2024 and that low-code would make up more than 65% of application development activity. Software automating specific tasks, such as enterprise resource planning (ERP), supply chain management, and customer relationship management (CRM), will also contribute to the market’s growth, Gartner said.


When cryptography attacks – how TLS helps malware hide in plain sight

Lots of things that we rely on, and that are generally regarded as bringing value, convenience and benefit to our lives…can be used for harm as well as good. Even the proverbial double-edged sword, which theoretically gave ancient warriors twice as much fighting power by having twice as much attack surface, turned out to be, well, a double-edged sword. With no “safe edge” at the rear, a double-edged sword that was mishandled, or driven back by an assailant’s counter-attack, became a direct threat to the person wielding it instead of to their opponent. ... The crooks have fallen in love with TLS as well. By using TLS to conceal their malware machinations inside an encrypted layer, cybercriminals can make it harder for us to figure out what they’re up to. That’s because one stream of encrypted data looks much the same as any other. Given a file that contains properly-encrypted data, you have no way of telling whether the original input was the complete text of the Holy Bible, or the compiled code of the world’s most dangerous ransomware. After they’re encrypted, you simply can’t tell them apart – indeed, a well-designed encryption algorithm should convert any input plaintext into an output ciphertext that is indistinguishable from the sort of data you get by repeatedly rolling a die.


Decoupling Software-Hardware Dependency In Deep Learning

Working with distributed systems, data processing such as Apache Spark, Distributed TensorFlow or TensorFlowOnSpark, adds complexity. The cost of associated hardware and software go up too. Traditional software engineering typically assumes that hardware is at best a non-issue and at worst a static entity. In the context of machine learning, hardware performance directly translates to reduced training time. So, there is a great incentive for the software to follow the hardware development in lockstep. Deep learning often scales directly with model size and data amount. As training times can be very long, there is a powerful motivation to maximise performance using the latest software and hardware. Changing the hardware and software may cause issues in maintaining reproducible results and run up significant engineering costs while keeping software and hardware up to date. Building production-ready systems with deep learning components pose many challenges, especially if the company does not have a large research group and a highly developed supporting infrastructure. However, recently, a new breed of startups have surfaced to address the software-hardware disconnect.


4 tips for launching a successful data strategy

Your business partners know that data can be powerful, and they know that they want it, but they do not always know, specifically, what data they need and how to use it. The IT organization knows how to collect, structure, secure, and serve up the data, but they are not typically responsible for defining how best to leverage the data. This gap between serving up the data and using the data can be as wide as the Ancient Mariner’s ocean (sorry), over which the CIO needs to build a bridge. ... But how do we attract those brilliant data scientists who can build the data dashboard straw man? To counter the challenge of a really tight market for these rare birds, Nick Daffan, CIO of Verisk Analytics, suggests giving data scientists what we all want: interesting work that creates an impact. “Data scientists want to get their hands on data that has both depth and breadth, and they want to work with the most advanced tools and methods," Daffan says. "They also want to see their models implemented, which means being able to help their business partners and customers use the data in a productive way.”


How to boost internal cyber security training

A big part of maintaining engagement among staff when it comes to cyber security is explaining how the consequences of insufficient protection could affect employees in particular. “Unless individuals feel personally invested, they tend not to concern themselves with the impact of a breach,” said James Spiteri, principal security specialist at Elastic. “Provide training that moves beyond theory and shows the risks and implications through actual practice to help engage the individual. For example, simulating an attack to show how an insecure password or bad security hygiene on personal accounts can lead to unwanted access of people’s personal information such as photos or payment details could be very effective in changing behaviours. “Teams need to find relatable tools to help break down the complexities of cyber security. Showcasing cyber security problems through relatable items like phones, and everyday situations such as connecting to public Wi-fi, can help spread awareness of employees’ digital footprint and how easy it is to spread information without being aware of it.”


Shedding light on the threat posed by shadow admins

Threat actors seek shadow admin accounts because of their privilege and the stealthiness they can bestow upon attackers. These accounts are not part of a group of privileged users, meaning their activities can go unnoticed. If an account is part of an Active Directory (AD) group, AD admins can monitor it, and unusual behaviour is therefore relatively straightforward to pinpoint. However, shadow admins are not members of a group since they gain a particular privilege by a direct assignment. If a threat actor seizes control of one of these accounts, they immediately have a degree of privileged access. This access allows them to advance their attack subtly and craftily seek further privileges and permissions while escaping defender scrutiny. Leaving shadow admin accounts on an organization’s AD is a considerable risk that’s best compared to handing over the keys to one’s kingdom to do a particular task and then forgetting to track who has the keys and when to ask for it back. It pays to know who exactly has privileged access, which is where AD admin groups help. Conversely, the presence of shadow admin accounts could be a sign that an attack is underway.



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor