
Daily Tech Digest - December 03, 2025


Quote for the day:

“The only true wisdom is knowing that you know nothing.” -- Socrates


How CISOs can prepare for the new era of short-lived TLS certificates

“Shorter certificate lifespans are a gift,” says Justin Shattuck, CSO at Resilience. “They push people toward better automation and certificate management practices, which will later be vital to post-quantum defense.” But this gift, intended to strengthen security, could turn into a curse if organizations are unprepared. Many still rely on manual tracking and renewal processes, using spreadsheets, calendar reminders, or system admins who “just know” when certificates are due to expire. ... “We’re investing in a living cryptographic inventory that doesn’t just track SSL/TLS certificates, but also keys, algorithms, identities, and their business, risk, and regulatory context within our organization and ties all of that to risk,” he says. “Every cert is tied to an owner, an expiration date, and a system dependency, and supported with continuous lifecycle-based communication with those owners. That inventory drives automated notifications, so no expiration sneaks up on us.” ... While automation is important as certificates expire more quickly, how it is implemented matters. Renewing a certificate a fixed number of days before expiration can become unreliable as lifespans change. The alternative is renewing based on a percentage of the certificate’s lifetime, and this method has an advantage: the timing adjusts automatically when the lifespan shortens. “Hard-coded renewal periods are likely to be too long at some point, whereas percentage renewal periods should be fine,” says Josh Aas.
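The percentage-based renewal rule Aas describes can be sketched in a few lines; the two-thirds fraction below is an illustrative choice, not a recommendation from the article:

```python
from datetime import datetime, timedelta

def renewal_time(not_before: datetime, not_after: datetime,
                 fraction: float = 2 / 3) -> datetime:
    """Schedule renewal after a fraction of the certificate's lifetime
    has elapsed, so the window scales automatically as lifespans shrink."""
    lifetime = not_after - not_before
    return not_before + lifetime * fraction

# A 90-day certificate issued on Jan 1 would be renewed after ~60 days;
# a 47-day certificate after ~31 days, with no configuration change.
issued = datetime(2025, 1, 1)
print(renewal_time(issued, issued + timedelta(days=90)))
```

A hard-coded "renew 30 days before expiry" rule, by contrast, would consume nearly two-thirds of a 47-day certificate's lifetime and break entirely for anything shorter.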


How Enterprises Can Navigate Privacy With Clarity

There's an interesting pattern across organizations of all sizes. When we started discussing DPDPA compliance a year ago, companies fell into two buckets: those already building toward compliance and others saying they'd wait for the final rules. That "wait and see period" taught us a lot. It showed how most enterprises genuinely want to do the right thing, but they often don't know where to start. In practice, mature data protection starts with a simple question that most enterprises haven't asked themselves: What personal data do we have coming in? Which of it is truly personal data? What are we doing with it? ... The first is how enterprises understand personal data itself. I tell clients not to view personal data as a single item but as part of an interconnected web. Once one data point links to another, information that didn't seem personal becomes personal because it's stored together or can be easily connected. ... The second gap is organizational visibility. Some teams process personal data in ways others don't know about. When we speak with multiple teams, there's often a light bulb moment where everyone realizes that data processing is happening in places they never expected. The third gap is third-party management. Some teams may share data under basic commercial arrangements or collect it through processes that seem routine. An IT team might sign up for a new hosting service without realizing it will store customer personal data. 


How to succeed as an independent software developer

Income for freelance developers varies depending on factors such as location, experience, skills, and project type. Average pay for a contractor is about $111,800 annually, according to ZipRecruiter, with top earners making potentially more than $151,000. ... “One of the most important ways to succeed as an independent developer is to treat yourself like a business,” says Darian Shimy, CEO of FutureFund, a fundraising platform built for K-12 schools, and a software engineer by trade. “That means setting up an LLC or sole proprietorship, separating your personal and business finances, and using invoicing and tax tools that make it easier to stay compliant,” Shimy says. ... “It was a full-circle moment, recognition not just for coding expertise, but for shaping how developers learn emerging technologies,” Kapoor says. “Specialization builds identity. Once your expertise becomes synonymous with progress in a field, opportunities—whether projects, media, or publishing—start coming to you.” ... Freelancers in any field need to know how to communicate well, whether it’s through the written word or conversations with clients and colleagues. If a developer communicates poorly, even great talent might not make the difference in landing gigs. ... A portfolio of work tells the story of what you bring to the table. It’s the main way to showcase your software development skills and experience, and is a key tool in attracting clients and projects. 


AI in 5 years: Preparing for intelligent, automated cyber attacks

Cybercriminals are increasingly experimenting with autonomous AI-driven attacks, where machine agents independently plan, coordinate, and execute multi-stage campaigns. These AI systems share intelligence, adapt in real time to defensive measures, and collaborate across thousands of endpoints — functioning like self-learning botnets without human oversight. ... Recent “vibe hacking” cases showed how threat actors embedded social-engineering goals directly into AI configurations, allowing bots to negotiate, deceive, and persist autonomously. As AI voice cloning becomes indistinguishable from the real thing, verifying identity will shift from who is speaking to how behaviourally consistent their actions are, a fundamental change in digital trust models. ... Unlike traditional threats, machine-made attacks learn and adapt continuously. Every failed exploit becomes training data, creating a self-improving threat ecosystem that evolves faster than conventional defences. Check Point Research notes that AI-driven tools like the Hexstrike-AI framework, originally built for red-team testing, were weaponised within hours to exploit Citrix NetScaler zero-days. These attacks also operate with unprecedented precision. ... Make DevSecOps a standard part of your AI strategy. Automate security checks across your CI/CD pipeline to detect insecure code, exposed secrets, and misconfigurations before they reach production.
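A minimal sketch of the kind of automated secret check such a pipeline might run; the pattern names and regexes here are illustrative assumptions, and production scanners ship far larger rule sets:

```python
import re

# Illustrative patterns only; real scanners cover many more credential types.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)(?:api|secret)[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),
}

def scan(text: str) -> list[str]:
    """Return the names of any secret patterns found in the text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

print(scan('aws_key = "AKIAABCDEFGHIJKLMNOP"'))  # ['aws_access_key']
```

Run against every diff before merge, a check like this fails the build rather than letting an exposed credential reach production.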


Threat intelligence programs are broken, here is how to fix them

“An effective threat intelligence program is the cornerstone of a cybersecurity governance program. To put this in place, companies must implement controls to proactively detect emerging threats, as well as have an incident handling process that prioritizes incidents automatically based on feeds from different sources. This needs to be able to correlate a massive amount of data and provide automatic responses to enhance proactive actions,” says Carlos Portuguez ... Product teams, fraud teams, governance and compliance groups, and legal counsel often make decisions that introduce new risk. If they do not share those plans with threat intelligence leaders, PIRs become outdated. Security teams need lines of communication that help them track major business initiatives. If a company enters a new region, adopts a new cloud platform, or deploys an AI capability, the threat model shifts. PIRs should reflect that shift. ... Manual analysis cannot keep pace with the volume of stolen credentials, stealer logs, forum posts, and malware data circulating in criminal markets. Security engineering teams need automation to extract value from this material. ... Measuring threat intelligence remains a challenge for organizations. The report recommends linking metrics directly to PIRs. This prevents metrics that reward volume instead of impact. ... Threat intelligence should help guide enterprise risk decisions. It should influence control design, identity practices, incident response planning, and long term investment.


Europe’s Digital Sovereignty Hinges on Smarter Regulation for Data Access

Europe must seek to better understand, and play into, the reality of market competition in the AI sector. Among the factors impacting AI innovation, access to computing power and data are widely recognized as most crucial. While some proposals have been made to address the former, such as making the continent’s supercomputers available to AI start-ups, little has been proposed with regard to addressing the data access challenge. ... By applying the requirement to AI developers independently of their provenance, the framework ensures EU competitiveness is not adversely impacted. On the contrary, the approach would enable EU-based AI companies to innovate with legal certainty, avoiding the cost and potential chilling effect of lengthy lawsuits compared to their US competitors. Additionally, by putting the onus on copyright owners to make their content accessible, the framework reduces the burden for AI companies to find (or digitize) training material, which affects small companies most. ... Beyond addressing a core challenge in the AI market, the example of the European Data Commons highlights how government action is not just a zero-sum game between fostering innovation and setting regulatory standards. By scrapping its digital regulation in the rush to boost the economy and gain digital sovereignty, the EU is surrendering its longtime ambition and ability to shape global technology in its image.


New training method boosts AI multimodal reasoning with smaller, smarter datasets

Recent advances in reinforcement learning with verifiable rewards (RLVR) have significantly improved the reasoning abilities of large language models (LLMs). RLVR trains LLMs to generate chain-of-thought (CoT) tokens (which mimic the reasoning processes humans use) before generating the final answer. This improves the model’s capability to solve complex reasoning tasks such as math and coding. Motivated by this success, researchers have applied similar RL-based methods to large multimodal models (LMMs), showing that the benefits can extend beyond text to improve visual understanding and problem-solving across different modalities. ... According to Zhang, the step-by-step process fundamentally changes the reliability of the model's outputs. "Traditional models often 'jump' directly to an answer, which means they explore only a narrow portion of the reasoning space," he said. "In contrast, a reasoning-first approach forces the model to explicitly examine multiple intermediate steps... [allowing it] to traverse much deeper paths and arrive at answers with far more internal consistency." ... The researchers also found that token efficiency is crucial. While allowing a model to generate longer reasoning steps can improve performance, excessive tokens reduce efficiency. Their results show that setting a smaller "reasoning budget" can achieve comparable or even better accuracy, an important consideration for deploying cost-effective enterprise applications.
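As a rough illustration of how a verifiable reward and a reasoning budget might combine in an RLVR setup (the function shape, budget, and penalty weight are assumptions for exposition, not details from the research):

```python
def verifiable_reward(answer: str, reference: str,
                      reasoning_tokens: int, budget: int = 512,
                      overrun_penalty: float = 0.001) -> float:
    """RLVR-style reward: 1.0 for a verifiably correct final answer,
    0.0 otherwise, minus a small penalty for exceeding the reasoning
    budget, discouraging needlessly long chains of thought."""
    correct = 1.0 if answer.strip() == reference.strip() else 0.0
    overrun = max(0, reasoning_tokens - budget)
    return correct - overrun_penalty * overrun

print(verifiable_reward("42", "42", reasoning_tokens=300))  # 1.0
```

Because the penalty only bites past the budget, the model is free to reason deeply when needed but is nudged toward the token-efficient regime the researchers found sufficient.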


Why Firms Can’t Ignore Agentic AI

The danger posed by agentic AI stems from its ability to carry out specific tasks with limited oversight. “When you give autonomy to a machine to operate within certain bounds, you need to be confident of two things: That it has been provided with excellent context so it knows how to make the right decisions – and that it is only completing the task asked of it, without using the information it’s been trusted with for any other purpose,” James Flint, AI practice lead at Securys, said. Mike Wilkes, enterprise CISO at Aikido Security, describes agentic AI as “giving a black box agent the ability to plan, act, and adapt on its own.” “In most companies that now means a new kind of digital insider risk with highly-privileged access to code, infrastructure, and data,” he warns. When employees start to use the technology without guardrails, shadow agentic AI introduces a number of risks. ... Adding to the risk, agentic AI is becoming easier to build and deploy. This will allow more employees to experiment with AI agents – often outside IT oversight, creating new governance and security challenges, says Mistry. Agentic AI can be coupled with the recently open-sourced Model Context Protocol (MCP), a protocol released by Anthropic that provides an open standard for orchestrating connections between AI assistants and data sources. By streamlining the work of development and security teams, this can “turbocharge productivity,” but it comes with caveats, says Pieter Danhieux, co-founder and CEO of Secure Code Warrior.


Why supply chains are the weakest link in today’s cyber defenses

One of the key reasons is that attackers want to make the best return on their efforts, and have learned that one of the easiest ways into a well-defended enterprise is through a partner. No thief would attempt to smash down the front door of a well-protected building if they could steal a key and slip in through the back. There’s also the advantage of scale: one company providing IT, HR, accounting or sales services to multiple customers may have fewer resources to protect itself, making it the natural point of attack. ... When the nature of cyber risks changes so quickly, yearly audits of suppliers can’t provide the most accurate evidence of their security posture. The result is an ecosystem built on trust, where compliance often becomes more of a comfort blanket. Meanwhile, attackers are taking advantage of the lag between each audit cycle, moving far faster than the verification processes designed to stop them. Unless verification evolves into a continuous process, we’ll keep trusting paperwork while breaches continue to spread through the supply chain. ... Technology alone won’t fix the supply chain problem, and a change in mindset is also needed. Too many boards are still distracted by the next big security trend, while overlooking the basics that actually reduce breaches. Breach prevention needs to be measured, reported and prioritized just like any other business KPI.


How AI Is Redefining Both Business Risk and Resilience Strategy

When implemented across prevention and response workflows, automation reduces human error, frees analysts’ time and preserves business continuity during high-pressure events. One applicable example includes automated data-restore sequences, which validate backup integrity before bringing systems online. Another example involves intelligent network rerouting that isolates subnets while preserving service. Organizations that deploy AI broadly across prevention and response report significantly lower breach costs. ... Biased AI models can produce skewed outputs that lead to poor decisions during a crisis. When a model is trained on limited or biased historical data, it can favor certain groups, locations or signals and then recommend actions that overlook real need. In practical terms, this can mean an automated triage system that routes emergency help away from underserved neighborhoods. ... Turn risk controls into operational patterns. Use staged deployments, automated rollback triggers and immutable model artifacts that map to code and data versions. Those practices reduce the likelihood that an unseen model change will result in a system outage. Next, pair AI systems with fallbacks for critical flows. This step ensures core services can continue if models fail. Monitoring should also be a consideration. It should display model metrics, such as drift and input distribution, alongside business measures, including latency and error rates.

Daily Tech Digest - September 22, 2024

Cloud Exit: 42% of Companies Move Data Back On-Premises

Agarwal said: ‘Nobody is running a cloud business as a charity.’ When businesses reach a size where it is economically viable, constructing their own infrastructure can save significant costs while eliminating the ‘cloud middleman’ and associated expenses. That said, the cloud is certainly not “just someone else’s computer,” as the joke goes. It has added immense value to those who adapted to it. But like artificial intelligence (AI), it has been mythologized and exaggerated as the ultimate tool for efficiency — romanticized to the point where pervasive myths about cost-effectiveness, reliability, and security are enough for businesses to dive headfirst into adoption. These myths are frequently discussed in high-profile forums, shaping perceptions that may not always align with reality, leading many to commit without fully considering potential drawbacks and real-world challenges. ... Avoidable charges and cloud waste were another noteworthy issue revealed in the 2023 State of Cloud Strategy Survey by HashiCorp. 94% of respondents in this survey reported incurring unnecessary expenses because of the underutilization of cloud resources. These costs often result from maintaining idle resources that do not cater to any of the company’s actual operational needs.


Revitalize aging data centers

Before tackling the specifics of upgrading a data center, it is important to conduct a thorough assessment to identify the specific needs and areas for improvement. This assessment should examine the data center's existing infrastructure, including server capacity, storage solutions, and energy consumption. It is also important to evaluate how these elements stack up against current power standards, grid connection requirements, efficiency benchmarks, and environmental and permit regulations. By benchmarking against newer facilities, operators can identify key areas where technological and infrastructural enhancements are needed. ... While integrating the latest server technologies might seem obvious, these systems demand different support from existing infrastructure. The increased computational loads should not compromise system reliability. Therefore, transitioning to newer generations of processors can require updates to your data center's support infrastructure. This includes upgrading power distribution units (PDUs) to handle higher power densities, enhancing network infrastructure to support faster data transfer rates, and reinforcing structural components to accommodate the increased weight and space requirements of modern equipment.


Personhood: Cybersecurity’s next great authentication battle as AI improves

Although intriguing, the personhood plan has fundamental issues. First, credentials are very easily faked by gen AI systems. Second, customers may be hard-pressed to take the significant time and effort to gather documents and wait in line at a government office to prove that they are human simply to visit public websites or sales call centers. Some argue that the mass creation of humanity cookies would create another pivotal cybersecurity weak spot. “What if I get control of the devices that have the humanity cookie on it?” FaceTec’s Meier asks. “The Chinese might then have a billion humanity cookies at one person’s control.” Brian Levine, a managing director for cybersecurity at Ernst & Young, believes that, while such a system might be helpful in the short run, it likely won’t effectively protect enterprises for long. “It’s the same cat-and-mouse game” that cybersecurity vendors have always played with attackers, Levine says. ... Sandy Carielli, a Forrester principal analyst and lead author of the Forrester bot report, says a critical element of any bot defense program is to not delay good bots, such as legitimate search engine spiders, in the quest to block bad ones. “The crux of any bot management system has to be that it never introduces friction for good bots and certainly not for legitimate customers.”


What’s behind the return-to-office demands?

The effect is clear: an average employee wants to work three days a week in the office, while managers want them there four days. The managers win, of course: today half of all civil servants in Stockholm County work in the office four days a week, a clear increase. There are different conclusions one can draw. Mine are these: Physical workplaces and physical interaction are better than digital workspaces and meetings when it comes to creative tasks and social/cultural togetherness. I think, depending on what you work with, employees and managers are quite in agreement. Leadership in the hybrid work models has not developed in the ways and at the pace required. Managers still have an excessive need for control, with no way to deal with this without trying to return to what was previously comfortable. Employees have probably not managed to convey to their bosses the positive aspects of home work — for the employer. It’s great that your life puzzle is easier and you can take power walks and do laundry, but how does that help the company? It’s no wonder that whispering about sneaky vacations is taking off. And there’s an elephant in the room we should talk about — people really hate open office spaces and activity-based workplaces.


Passwordless AND Keyless: The Future of (Privileged) Access Management

Because SSH keys are functionally different from passwords, traditional PAMs don't manage them very well. Legacy PAMs were built to vault passwords, and they try to do the same with keys. Without going into too much detail about key functionality (like public and private keys), vaulting private keys and handing them out at request simply doesn't work. Keys must be secured at the server side, otherwise keeping them under control is a futile effort. Furthermore, your solution needs to discover keys first to manage them. Most PAMs can't. There are also key configuration files and other key(!) elements involved that traditional PAMs miss. ... Let's come back to the topic of passwords. Even if you have them vaulted, you aren't managing them in the best possible way. Modern, dynamic environments - using in-house or hosted cloud servers, containers, or Kubernetes orchestration - don't work well with vaults or with PAMs that were built 20 years ago. This is why we offer modern ephemeral access where the secrets needed to access a target are granted just-in-time for the session, and they automatically expire once the authentication is done. This leaves no passwords or keys to manage - at all.
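A rough sketch of the ephemeral-access idea: a credential that carries its own expiry, so there is nothing left to vault or rotate afterwards. The class shape, field names, and TTL below are illustrative assumptions, not the vendor's actual design:

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class EphemeralCredential:
    """A just-in-time secret granted for one session; once the TTL
    elapses it is useless, leaving no standing password or key."""
    target: str
    ttl_seconds: float = 300.0
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    issued_at: float = field(default_factory=time.monotonic)

    def is_valid(self) -> bool:
        return time.monotonic() - self.issued_at < self.ttl_seconds

cred = EphemeralCredential(target="prod-db", ttl_seconds=0.05)
print(cred.is_valid())   # valid immediately after issuance
time.sleep(0.1)
print(cred.is_valid())   # expired on its own; nothing to revoke
```

The contrast with vaulting is the point: a vaulted key must be discovered, stored, rotated, and revoked, while an ephemeral secret simply stops working.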


Cybersecurity is Beyond Protecting Personal Data

Cyberattacks are not just about stealing personal data; they also involve stealing intellectual property and sensitive corporate information. In India, the number of data breaches has surged in recent years. The Indian Computer Emergency Response Team (CERT-In) reported over 150,000 cyber incidents in 2023 alone, with significant breaches occurring in sectors such as finance, healthcare, and government. ... While there is a global scarcity of competent cybersecurity personnel, India is experiencing an exceptionally severe shortfall. A report conducted by (ISC)² indicates that there is a 3 million cybersecurity workforce shortage worldwide, with India contributing significantly to this shortfall. This deficiency hinders businesses' capacity to detect and address cyber threats. Compounding the problem, team members' ignorance and lack of training can lead to human mistakes, which are a common way for cyberattacks to get started. ... Compliance with cybersecurity legislation and standards is critical for data protection and retaining confidence. India's legal landscape is changing, with initiatives like the Information Technology Act and the Personal Data Protection Bill aimed at improving cybersecurity.


Google calls for halting use of WHOIS for TLS domain verifications

TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communications verifying that a server belongs to a trusted entity and encrypts all traffic passing between it and an end user. ... The rules for how certificates are issued and the process for verifying the rightful owner of a domain are left to the CA/Browser Forum. One "base requirement rule" allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved. ... Specifically, watchTowr researchers were able to receive a verification link for any domain ending in .mobi, including ones they didn’t own. The researchers did this by deploying a fake WHOIS server and populating it with fake records. Creation of the fake server was possible because dotmobiregistry.net—the previous domain hosting the WHOIS server for .mobi domains—was allowed to expire after the server was relocated to a new domain. watchTowr researchers registered the domain, set up the imposter WHOIS server, and found that CAs continued to rely on it to verify ownership of .mobi domains.


How API Security Fits into DORA Compliance: Everything You Need to Know

Financial institutions rely heavily on third-party service providers, and APIs are the gateway through which many of these vendors access core banking systems. This introduces significant risk, as third-party APIs may become the weakest link in the supply chain. DORA places substantial emphasis on managing these risks, as outlined in Article 28, stating that financial entities must ensure that third-party providers “implement and maintain appropriate measures to manage ICT risks" and that institutions must "ensure the quality and integration of ICT services provided by third parties." You need to start simple and be able to answer two questions: Who are your vendors? What third-party apps do you have connected? One of the biggest challenges here is the concept of shadow APIs—those untracked, unauthorized, or forgotten endpoints that can remain active long after their intended purpose. Shadow APIs expose financial institutions to vulnerabilities, making it difficult to track and control third-party access. DORA’s Article 28 further reinforces the need for financial institutions to "assess third-party ICT service providers’ ability to protect the integrity, security, and confidentiality of data, and to manage risks related to outsourcing."
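Answering "what is actually connected?" often starts with a simple diff between observed traffic and the approved inventory. A minimal sketch, assuming endpoint paths have been harvested from gateway logs (the paths themselves are invented for illustration):

```python
def find_shadow_apis(observed_endpoints: set[str],
                     inventoried_endpoints: set[str]) -> set[str]:
    """Endpoints seen in live gateway traffic but absent from the
    approved API inventory are candidate shadow APIs."""
    return observed_endpoints - inventoried_endpoints

observed = {"/v1/accounts", "/v1/transfers", "/v0/legacy-export"}
inventory = {"/v1/accounts", "/v1/transfers"}
print(find_shadow_apis(observed, inventory))  # {'/v0/legacy-export'}
```

In practice the observed set would be refreshed continuously from gateway or WAF logs, so a forgotten endpoint surfaces as soon as anything calls it.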


Dirty code still runs, and that’s not a good thing

Quality code benefits developers by minimizing the time and effort spent on patching and refactoring later. Having confidence that code is clean also enhances collaboration, allowing developers to more easily reuse code from colleagues or AI tools. This not only simplifies their work but also reduces the need for retroactive fixes and helps prevent and lower technical debt. To deliver clean code, developers should start with the right guardrails, tests, and analysis from the beginning, in the IDE. Pairing unit testing with static analysis can also help guarantee quality. The sooner these reviews happen in the development process, the better. ... Developers and businesses can’t afford to perpetuate the cycle of bad code and, consequently, subpar software. Pushing poor-quality code through to deployment will only reintroduce software that breaks down later, even if it seems to run fine in the interim. To end the cycle, developers must deliver software built on clean code before deploying it. By implementing effective reviews and tests that gatekeep bad code before it becomes a major problem, developers can better equip themselves to deliver software with both functionality and longevity.
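The guardrails-from-the-start idea can be as small as writing the test next to the function it covers; a toy sketch (a static analyzer running alongside in the IDE or CI would additionally flag type errors and dead paths):

```python
def parse_port(value: str) -> int:
    """Parse a TCP port, rejecting out-of-range values early so bad
    input fails in a test rather than in production."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# The unit test lives beside the code and runs on every change.
def test_parse_port():
    assert parse_port("8080") == 8080
    try:
        parse_port("70000")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range port")

test_parse_port()
```

The point is timing, not sophistication: the same check applied at the IDE stage costs seconds, while the same defect found after deployment costs a patch cycle.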


The Perfect Balance: Merging AI and Design Thinking for Innovative Pricing Strategies

This combination of AI’s optimization and Design Thinking’s creative transformation is exactly what modern businesses need to stay competitive. Relying solely on AI to adjust pricing may lead to efficiency gains, but without the innovation brought by Design Thinking, businesses risk missing out on new opportunities to reshape their pricing models and align them more closely with customer needs. Conversely, while Design Thinking can spark innovation, without AI’s precision, companies might struggle to implement their ideas in a way that maximizes profitability. It is by uniting these two approaches that organizations can build pricing strategies that are both efficient and forward-looking. For businesses, pricing is a powerful lever that influences profitability, market position, and customer perception. In today’s competitive landscape, those that fail to leverage both AI and Design Thinking risk falling behind. AI offers the operational benefits of real-time optimization, driving immediate financial returns. Design Thinking provides the creative space to explore new value propositions and pricing structures that can secure long-term customer loyalty. 



Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower

Daily Tech Digest - August 28, 2024

Improving healthcare fraud prevention and patient trust with digital ID

Digital trust involves the use of secure and transparent technologies to protect patient data while enhancing communication and engagement. For example, digital consent forms and secure messaging platforms allow patients to communicate with their healthcare providers conveniently while ensuring that their data remains protected. Furthermore, integrating digital trust technology into healthcare systems can streamline administrative processes, reduce paperwork, and minimize the chances of errors, according to a blog post by Five Faces. This not only enhances operational efficiency but also improves the overall patient experience by reducing wait times and simplifying access to medical services. ... These smart cards, embedded with secure microchips, store vital patient information and health insurance details, enabling healthcare providers to access accurate and up-to-date information during consultations. The use of chip-based ID cards reduces the risk of identity theft and fraud, as these cards are difficult to duplicate and require secure authentication methods. This technology ensures that only authorized individuals can access patient information, thereby protecting sensitive data from unauthorized access.


A CEO's Take on AI in the Workforce

Those ignoring the AI transformation and not uptraining their skilled staff are not putting themselves in a position to make use of untapped data that can provide insights into other areas of opportunity for their business. Making minimal-to-no investments in emerging technology merely delays the inevitable and puts companies at a disadvantage at the hands of their competitors. Alternatively, being too aggressive with AI can lead to security vulnerabilities or critical talent loss. While AI integration is critical to accelerating business outputs, doing so without moderators, data safeguards, and regulators to keep organizations in line with data governance and compliance is actually exposing companies to security issues. ... AI should not replace people, but rather presents an opportunity to better utilize them. AI can help solve time-management and efficiency issues across organizations, allowing skilled people to focus on creative and strategic roles or projects that drive better business value. The role of AI should focus on automating time-consuming, repetitive, administrative tasks, thereby leaving individuals to be more calculated and intentional with their time.


The promise of open banking: How data sharing is changing financial services

The benefits of open banking are multifaceted. Customers gain greater control over their financial data, allowing them to securely share it with authorized providers. This empowers them to explore a wider range of customized financial products and services, ultimately promoting financial stability and well-being. Additionally, open banking fosters innovation within the industry, as Fintech companies leverage customer-consented data to develop cutting-edge solutions. The Account Aggregator (AA) framework, regulated by the Reserve Bank of India (RBI), is a cornerstone of open banking in India. AAs act as trusted intermediaries, allowing users to consolidate their financial data from various sources, including banks, mutual funds, and insurance companies, into a single platform. ... APIs empower platforms to aggregate FD offerings from a multitude of banks across India. This provides investors with a comprehensive view of available options, allowing them to compare interest rates, tenures, minimum deposit requirements, and other features within a single platform. This transparency empowers informed decision-making, enabling investors to select the FD that best aligns with their risk appetite and financial goals.
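The comparison workflow the article describes reduces to filtering and sorting aggregated offers; a minimal sketch in which the bank names, rates, and deposit minimums are invented for illustration:

```python
# Hypothetical aggregated fixed-deposit (FD) offers, as an API-driven
# platform might return them after querying multiple banks.
fd_offers = [
    {"bank": "Bank A", "rate_pct": 7.1, "tenure_months": 12, "min_deposit": 10000},
    {"bank": "Bank B", "rate_pct": 7.4, "tenure_months": 12, "min_deposit": 25000},
    {"bank": "Bank C", "rate_pct": 6.9, "tenure_months": 12, "min_deposit": 5000},
]

def best_offers(offers: list[dict], budget: int) -> list[dict]:
    """Offers the investor can afford, best interest rate first."""
    affordable = [o for o in offers if o["min_deposit"] <= budget]
    return sorted(affordable, key=lambda o: o["rate_pct"], reverse=True)

print([o["bank"] for o in best_offers(fd_offers, budget=15000)])  # ['Bank A', 'Bank C']
```

The aggregator's value is in the data collection layer; once consented data flows through the AA framework, the ranking logic itself is this simple.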


What are the realistic prospects for grid-independent AI data centers in the UK?

Colocation companies looking to develop in the UK are already evaluating on-site gas engine power generation and CHP (combined heat and power). To date, UK CHP projects have been hampered by a lack of grid capacity; microgrid developments are viewed as a solution to this. CHP and microgrids should also make data center developments more appealing to local government planning departments. ... Data center developments have hit front-line politics, with Rachel Reeves, the new UK Labour government’s Chancellor of the Exchequer (Finance Minister), citing data center infrastructure and reform of planning law as critical to growing the country’s economy. Some projects that were denied planning permission already look likely to be reconsidered, with reports that Deputy Prime Minister Angela Rayner had “recovered two planning appeals for data centers in Buckinghamshire and Hertfordshire”. It seems clear that meeting data center capacity demand for AI, cloud, and other digital services will require on-site power generation in some form or other.


Why Every IT Leader Needs a Team of Trusted Advisors

When seeking advisors, look for individuals with the time and willingness to join your kitchen cabinet, Kelley says. "Be mindful of their schedules and obligations, since they are doing you a favor," he notes. Additionally, if you're offering any perks, such as paid meals, travel reimbursement, or direct monetary payments, let them know upfront. Such bonuses are relatively rare, however. "More than likely, you’re talking about individual or small group phone calls or meetings." Above all, be honest and open with your team members. "Let them know what kind of help you need and the time frame you are working under," Kelley says. "If you've heard different or contradictory advice from other sources, bring it up and get their reaction," he recommends. Keep in mind that an advisory team is a two-way relationship. Kelley recommends personalizing each connection with an occasional handwritten note, book, lunch, or ticket to a concert or sporting event. On the other hand, if you decide to ignore their input or advice, you need to explain why, he suggests. Otherwise, they might conclude that being a team participant is a waste of time. Also be sure to help your team members whenever they need advice or support. 


Why CI and CD Need to Go Their Separate Ways

Continuous promotion is a concept designed to bridge the gap between CI and CD, addressing the limitations of traditional CI/CD pipelines when used with modern technologies like Kubernetes and GitOps. The idea is to insert an intermediary step that focuses on promotion of artifacts based on predefined rules and conditions. This approach allows more granular control over the deployment process, ensuring that artifacts are promoted only when they meet specific criteria, such as passing certain tests or receiving necessary approvals. By doing so, continuous promotion decouples the CI and CD processes, allowing each to focus on its core responsibilities without overextension. ... Introducing a systematic step between CI and CD ensures that only qualified artifacts progress through the pipeline, reducing the risk of faulty deployments. This approach allows the implementation of detailed rule sets, which can include criteria such as successful test completions, manual approvals or compliance checks. As a result, continuous promotion provides greater control over the deployment process, enabling teams to automate complex decision-making processes that would otherwise require manual intervention.
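The promotion gate described above can be sketched as a rule evaluator: an artifact advances only when every predefined condition holds. The rule names and artifact fields below are illustrative assumptions, not tied to any particular GitOps tool.

```python
# Minimal sketch of a continuous-promotion gate between CI and CD: the
# artifact is promoted only if all rules pass. Rules and fields are
# illustrative, not a real pipeline's schema.

RULES = {
    "tests_passed": lambda a: a["test_status"] == "passed",
    "approved":     lambda a: "qa-lead" in a["approvals"],
    "compliant":    lambda a: not a["open_compliance_issues"],
}

def evaluate_promotion(artifact):
    """Return (promote?, names of the rules that failed)."""
    failed = [name for name, rule in RULES.items() if not rule(artifact)]
    return (not failed, failed)

artifact = {
    "image": "registry.example.com/app:1.4.2",
    "test_status": "passed",
    "approvals": ["qa-lead"],
    "open_compliance_issues": [],
}

ok, failed = evaluate_promotion(artifact)  # ok is True, failed is []
```

Because the gate sits between CI and CD, CI only has to produce artifacts and CD only has to deploy whatever reaches it; the rule set is where approvals and compliance checks are automated.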


CIOs listen up: either plan to manage fast-changing certificates, or fade away

Even when organizations finally decide to set policies and standardize security for new deployments, mitigating the existing deployments is a huge effort, and in the modern stack, there’s no dedicated operations team, he says. That makes it more important for CIOs to take ownership of the problem, Cairns points out. “Especially in larger, more complex and global organizations, the magnitude of trying to push these things through the organization is often underestimated,” he says. “Some of that is having a good handle on the culture and how to address these things in terms of messaging, communications, enforcement of the right policies and practices, and making sure you’ve got the proper stakeholder buy-in at the various points in this process — a lot of governance aspects.” ... Many large organizations will soon need to revoke and reprovision TLS certificates at scale. One in five Fortune 1000 companies use Entrust as their certificate authority, and from November 1, 2024, Chrome will follow Firefox in no longer trusting TLS certificates from Entrust because of a pattern of compliance failures, which the CA argues were, ironically, sometimes caused by enterprise customers asking for more time to deal with revocation. 
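The revoke-and-reprovision problem above starts with knowing which certificates need attention. The sketch below is a hedged illustration of scanning a certificate inventory for certs that are expired, expiring soon, or issued by a CA that browsers will stop trusting; the inventory format is an assumption made for the example.

```python
# Illustrative scan of a certificate inventory: flag certs that are
# expired, inside a renewal warning window, or issued by a distrusted CA.
# The dict layout and CA name matching are assumptions for this sketch.

from datetime import datetime, timedelta

DISTRUSTED_CAS = {"Entrust"}

def needs_action(cert, now, warn_window=timedelta(days=30)):
    if cert["issuer"] in DISTRUSTED_CAS:
        return "reissue: distrusted CA"
    if cert["not_after"] <= now:
        return "expired"
    if cert["not_after"] - now <= warn_window:
        return "expiring soon"
    return None

now = datetime(2024, 10, 1)
inventory = [
    {"host": "shop.example.com", "issuer": "Entrust",
     "not_after": datetime(2025, 3, 1)},
    {"host": "api.example.com", "issuer": "Example CA",
     "not_after": datetime(2024, 10, 20)},
    {"host": "www.example.com", "issuer": "Example CA",
     "not_after": datetime(2025, 6, 1)},
]

actions = {c["host"]: needs_action(c, now) for c in inventory}
# shop.example.com needs reissue, api.example.com is expiring soon,
# www.example.com needs nothing yet.
```

A real inventory would be populated from discovery scans and CA records rather than a hand-written list, but the triage logic stays the same.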


Effortless Concurrency: Leveraging the Actor Model in Financial Transaction Systems

In a financial transaction system, the data flow for handling inbound payments involves multiple steps and checks to ensure compliance, security, and accuracy. However, potential failure points exist throughout this process, particularly when external systems impose restrictions or when the system must dynamically decide on the course of action based on real-time data. ... Implementing distributed locks is inherently more complex, often requiring external systems like ZooKeeper, Consul, Hazelcast, or Redis to manage the lock state across multiple nodes. These systems need to be highly available and consistent to prevent the distributed lock mechanism from becoming a single point of failure or a bottleneck. ... In this messaging based model, communication between different parts of the system occurs through messages. This approach enables asynchronous communication, decoupling components and enhancing flexibility and scalability. Messages are managed through queues and message brokers, which ensure orderly transmission and reception of messages. ... Ensuring message durability is crucial in financial transaction systems because it allows the system to replay a message if the processor fails to handle the command due to issues like external payment failures, storage failures, or network problems.
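The mailbox-per-actor idea above can be shown in miniature: each actor owns its state and processes one message at a time, so no locks (local or distributed) are needed around the balance. This is a toy thread-based sketch, not a production actor framework; the message shapes are assumptions.

```python
# Minimal actor: a thread draining a mailbox queue. Because only this
# thread ever touches self.balance, the account needs no locking.

import queue
import threading

class AccountActor:
    def __init__(self):
        self.balance = 0
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill: stop the actor
                break
            kind, amount, reply = msg
            if kind == "deposit":
                self.balance += amount
            elif kind == "withdraw" and self.balance >= amount:
                self.balance -= amount
            reply.put(self.balance)  # respond to the sender

    def send(self, kind, amount):
        reply = queue.Queue()
        self.mailbox.put((kind, amount, reply))
        return reply.get()           # wait for this message's reply

actor = AccountActor()
actor.send("deposit", 100)
actor.send("withdraw", 30)   # balance is now 70
actor.send("withdraw", 500)  # insufficient funds: balance stays 70
actor.mailbox.put(None)      # shut the actor down
```

Durability, as the excerpt notes, would come from backing the mailbox with a persistent broker so unprocessed commands can be replayed after a failure; an in-memory `Queue` illustrates only the concurrency model.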


Hundreds of LLM Servers Expose Corporate, Health & Other Online Data

Flowise is a low-code tool for building all kinds of LLM applications. It's backed by Y Combinator, and sports tens of thousands of stars on GitHub. Whether it be a customer support bot or a tool for generating and extracting data for downstream programming and other tasks, the programs that developers build with Flowise tend to access and manage large quantities of data. It's no wonder, then, that the majority of Flowise servers are password-protected. ... Leaky vector databases are even more dangerous than leaky LLM builders, as they can be tampered with in such a way that does not alert the users of AI tools that rely on them. For example, instead of just stealing information from an exposed vector database, a hacker can delete or corrupt its data to manipulate its results. One could also plant malware within a vector database such that when an LLM program queries it, it ends up ingesting the malware. ... To mitigate the risk of exposed AI tooling, Deutsch recommends that organizations restrict access to the AI services they rely on, monitor and log the activity associated with those services, protect sensitive data trafficked by LLM apps, and always apply software updates where possible.


Generative AI vs. Traditional AI

Traditional AI, often referred to as “symbolic AI” or “rule-based AI,” emerged in the mid-20th century. It relies on predefined rules and logical reasoning to solve specific problems. These systems operate within a rigid framework of human-defined guidelines and are adept at tasks like data classification, anomaly detection, and decision-making processes based on historical data. In sharp contrast, generative AI is a more recent development that leverages advanced ML techniques to create new content. This form of AI does not follow predefined rules but learns patterns from vast datasets to generate novel outputs such as text, images, music, and even code. ... Traditional AI relies heavily on rule-based systems and predefined models to perform specific tasks. These systems operate within narrowly defined parameters, focusing on pattern recognition, classification, and regression through supervised learning techniques. Data fed into these models is typically structured and labeled, allowing for precise predictions or decisions based on historical patterns. In contrast, generative AI uses neural networks and advanced ML models to produce human-like content. This approach leverages unsupervised or semi-supervised learning techniques to understand underlying data distributions.
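The "rigid framework of human-defined guidelines" on the traditional side fits in a few lines; only that half can be shown briefly, since a generative model learns its behavior from data rather than from rules. The task, fields, and thresholds below are illustrative assumptions.

```python
# A toy symbolic/rule-based classifier: every decision follows an
# explicit, human-written rule, in contrast to a generative model that
# learns patterns from data. Thresholds are illustrative.

def rule_based_flag(txn):
    """Classic rule-based anomaly check for a transaction."""
    if txn["amount"] > 10_000:
        return "review"                          # large-value rule
    if txn["country"] not in txn["usual_countries"]:
        return "review"                          # geo-anomaly rule
    return "ok"

txn = {"amount": 250, "country": "DE", "usual_countries": {"DE", "FR"}}
rule_based_flag(txn)   # "ok": every predefined rule passes
```

The strengths and limits described above are visible even here: the behavior is transparent and auditable, but the system can only handle cases its authors anticipated.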



Quote for the day:

"Opportunities don't happen. You create them." -- Chris Grosser

Daily Tech Digest - February 20, 2023

How quantum computing threatens internet security

“Basically, the problem with our current security paradigm is that it relies on encrypted information and decryption keys that are sent over a network from sender to receiver. Regardless of the way the messages are encrypted, in theory, someone can intercept and use the keys to decrypt apparently secure messages. Quantum computers simply make this process faster,” Tanaka explains. “If we dispense with this key-sharing idea and instead find a way to use unpredictable random numbers to encrypt information, the system might be immune. [Muons] are capable of generating truly unpredictable numbers.” The proposed system is based on the fact that the speed of arrival of these subatomic particles is always random. This would be the key to encrypt and decrypt the message, if there is a synchronized sender and receiver. In this way, the sending of keys would be avoided, according to the Japanese team. However, muon detection devices are large, complex and power-hungry, limitations that Tanaka believes the technology could ultimately overcome.
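The core of the proposal is that if sender and receiver can each derive the same truly random stream locally, they can XOR it with the message and never transmit a key. The sketch below fakes the shared randomness with a seeded PRNG purely for illustration; unlike muon arrival timings, a PRNG stream is not truly unpredictable, so this is the mechanism only, not the security claim.

```python
# One-time-pad-style XOR encryption with a locally derived stream.
# The shared seed stands in for the synchronized muon measurements;
# nothing key-like crosses the network in the real scheme.

import random

def keystream(seed, n):
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

def xor_bytes(data, stream):
    return bytes(a ^ b for a, b in zip(data, stream))

message = b"wire 500 EUR"
stream = keystream(42, len(message))        # both ends generate this locally
ciphertext = xor_bytes(message, stream)     # sent over the network
plaintext = xor_bytes(ciphertext, keystream(42, len(message)))
# plaintext equals the original message; no key was ever transmitted
```

With a genuinely random, never-reused stream this is the classical one-time pad, which is information-theoretically secure regardless of the attacker's computing power, quantum or otherwise.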


Considering Entrepreneurship After a Successful Corporate Career? Here Are 3 Things You Need to Know

Many of you may be concerned that a transition could alienate your audience and force you to wait before making a move. But this is a common misconception rooted in the idea that your personal brand reflects what you do professionally. At Brand of a Leader, we help our clients shift their thinking by showing them that their personal brand is who they are, not what they do. The goal of personal brand discovery is to understand your essence and package it in a way that appeals to others. Your vocation is only one of your key talking points, and when you pivot, you simply shift those points while maintaining the essence of your brand. So, when should you start building your personal brand? The answer is simple: the sooner, the better. Building a brand takes time — time to build an audience, create visibility and establish associations between your name and consistent perceptions in people's minds. Starting sooner means you'll start seeing results faster.


Establish secure routes and TLS termination with wildcard certificates

By default, the Red Hat OpenShift Container Platform uses the Ingress Operator to create an internal certificate authority (CA) and issue a wildcard certificate valid for applications under the .apps subdomain. The web console and the command-line interface (CLI) use this certificate. You can replace the default wildcard certificate with one issued by a public CA included in the CA bundle provided by the container userspace. This approach allows external clients to connect to applications running under the .apps subdomain securely. You can replace the default ingress certificate for all applications under the .apps subdomain. After replacing the certificate, all applications, including the web console and CLI, will be encrypted using the specified certificate. One clear benefit of using a wildcard certificate is that it minimizes the effort of managing and securing multiple subdomains. However, this convenience comes at the cost of sharing the same private key across all managed subdomains.
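The reason one wildcard certificate covers every application route is the single-label matching rule: a certificate for `*.apps.example.com` matches any one additional label under that subdomain, and nothing deeper. The sketch below illustrates that rule; the domain names are examples, not OpenShift defaults.

```python
# Illustrative single-label wildcard matching, the semantics that let one
# "*.apps.<domain>" certificate secure every app route under .apps.

def wildcard_matches(cert_name, hostname):
    if not cert_name.startswith("*."):
        return cert_name == hostname
    suffix = cert_name[1:]                  # ".apps.example.com"
    label, sep, rest = hostname.partition(".")
    # Exactly one extra label, which must not itself contain a dot.
    return sep == "." and "." not in label and ("." + rest) == suffix

wildcard_matches("*.apps.example.com", "console.apps.example.com")  # True
wildcard_matches("*.apps.example.com", "a.b.apps.example.com")      # False
```

The second case is why the convenience has limits: routes nested more than one label deep are not covered, and, as noted above, every matched host shares the same private key.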


Overcoming a cyber “gut punch”: An interview with Jamil Farshchi

Your biggest enemies in a breach are time and perfection. Everyone wants everything done in a split second. And having perfect information to construct perfect solutions and make perfect decisions is impossible. Time and perfection will ultimately crush you. By contrast, your two greatest allies are communication and optionality. Communication is being able to lay out the story of where things are, and to make sure everyone is rowing in the same direction. It’s being able to communicate the current status, and your plans, to regulators—and at the same time being able to reassure your customers and make sure they have confidence that you’re going to be able to navigate to the other side. Optionality is critical, because no one makes perfect decisions in this kind of firefight. Unless you’re comfortable making decisions that might not be right at any given point in time, you’re going to fail. [As a leader,] you need to frame up a program and the decisions you’re making in such a way that you’re comfortable rolling them back or tailoring them as you learn more, and as things progress.


7 reasons to avoid investing in cyber insurance

Two things organizations might want to consider right off the bat when contemplating an insurance policy are the cost to and benefit for the business, SecAlliance Director of Intelligence Mick Reynolds tells CSO. “When looking at cost, the recent spate of ransomware attacks globally has seen massive increases in premiums for firms wishing to include coverage of such events. Renewal quotes have, in some cases, increased from around £100,000 ($120,000) to over £1.5 million ($1.8 million). Such massive increases in premiums, for no perceived increase in coverage, are starting now to be challenged by board risk committees as to the overall value they provide, with some now deciding that accepting exposure to major cyber events such as ransomware is preferable to the cost of the associated policy.” As for benefits to the business, insurance is primarily taken out to cover losses incurred during a major cyber event, and 99% of the time these losses are quantifiable and relate predominantly to response and recovery costs, Reynolds says.


The importance of plugging insurance cyber response gaps

The insurance industry is a lucrative target as organisations hold large amounts of private and sensitive information about their policy holders who, rightfully so, have the expectation of their data being kept safe and secure. This makes it no surprise that the industry is a key target for cyber criminals due to the massive disruption it can cause and the potential high financial reward on offer. Research shows that 82 per cent of the largest insurance carriers were the focus of ransom attacks in 2022. It is expected that the insurance industry will only become a more favourable target, and these types of disruptions will become increasingly severe. The insurance industry is one that has embraced innovation and new forms of technology in its practices over recent years in order to offer their customers a seamless experience. In doing so, alongside the onset of remote working catalysed by the pandemic, they have increased their threat surface. ... These are just the tip of the iceberg, so when cyber criminals look to exploit data, the insurance industry is a primary target due its huge customer base.


Value Chain Analysis: Best Practices for Improvements

To stay competitive, organizations must ensure that they have picked the right partners for each of the functions in the value chain, and that appropriate value is captured by each participant. “In addition to ensuring each participant’s value and usefulness in the chain, value chain analysis enables organizations to periodically verify that functions are still necessary, and that value is being delivered efficiently without undue waste such as administrative burden, communications costs or transit or other ancillary functions,” he says. Business leaders and IT leaders like the chief information officer and chief data officer must prove that they are benefiting the bottom line. While it is time-consuming, value chain analysis is a key method to examine company value -- an essential practice during times of high stakes and economic uncertainty. Jon Aniano, senior vice president, Zendesk, adds that running a full VCA requires analyzing and tracking a massive amount of data across your entire company.


Cybersecurity takes a leap forward with AI tools and techniques

“An effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of decisions that it enacts,” said Samrat Chatterjee, a data scientist who presented the team’s work. “Deep reinforcement learning holds great potential in this space, where the number of system states and action choices can be large.” DRL, which combines reinforcement learning and deep learning, is especially adept in situations where a series of decisions in a complex environment needs to be made. Good decisions leading to desirable results are reinforced with a positive reward (expressed as a numeric value); bad choices leading to undesirable outcomes are discouraged via a negative cost. It’s similar to how people learn many tasks. A child who does their chores might receive positive reinforcement with a desired playdate; a child who doesn’t do their work gets negative reinforcement, such as having a digital device taken away.
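The reward/cost mechanism above is easiest to see in the tabular Q-learning update that deep RL generalizes. This is a toy sketch, not the team's system: the states, actions, and reward values are invented for illustration.

```python
# Tabular Q-learning update: a rewarded action's value estimate rises,
# a penalized action's falls. Toy security-flavored states and actions.

from collections import defaultdict

Q = defaultdict(float)      # Q[(state, action)] -> estimated value
alpha, gamma = 0.5, 0.9     # learning rate, discount factor

def update(state, action, reward, next_state, actions=("allow", "block")):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Blocking an intrusion earns a positive reward...
update("suspicious_traffic", "block", +1.0, "safe")
# ...while letting it through incurs a negative cost.
update("suspicious_traffic", "allow", -1.0, "compromised")

# Q now favors "block" over "allow" in the suspicious_traffic state.
```

Deep RL replaces the lookup table `Q` with a neural network, which is what makes the approach workable when, as Chatterjee notes, the number of system states and action choices is large.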


9 ways ChatGPT will help CIOs

“ChatGPT is very powerful out of the box, so it doesn’t require extensive training or teaching to get up to speed and handle specific business processes. A valuable initial business application for ChatGPT should be directed towards routine tasks, such as filling out a contract. It can effectively review the document and answer the necessary fields using the data and context provided by the organization. With that said, ChatGPT has the potential to shoulder administrative burdens for CIOs quickly, but it’s important to regularly measure the accuracy of its work, especially if an organization plans to use it regularly. The best way for CIOs to get started with ChatGPT is to take the time to grasp how it would work within the context of their organization before rushing to widespread adoption. At these early stages of the technology, it’s better to let it complement existing workflows under close supervision instead of restructuring around it as an end-to-end solution. 


Art Of Knowledge Crunching In Domain Driven Design

Miscommunication during knowledge crunching sessions can have various causes, such as cognitive bias, a type of error in reasoning, decision-making, and perception that occurs due to the way our brains perceive and process information. This type of bias occurs when an individual’s cognitive processes lead them to form inaccurate conclusions or make irrational decisions. For example, when betting at a roulette table, if previous outcomes have landed on red, we might mistakenly assume that the next outcome will be black; however, these events are independent of each other (i.e., their outcomes do not affect each other’s probabilities). Also, apophenia is the tendency to perceive meaningful connections between unrelated things, as with conspiracy theories, or the moment we think we get it but actually we do not. A good example of this could be an image sent from Mars that includes a shape on a rock that you might think is the face of an alien, but it’s just a random shape of a rock.
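The roulette independence claim is easy to check empirically: in a quick simulation, the probability of black after a run of reds comes out the same as the overall probability of black. (The green zero is ignored for simplicity, so each spin is a fair red/black coin flip.)

```python
# Simulating the gambler's-fallacy example: the chance of black after
# three reds is no different from the unconditional chance of black,
# because spins are independent.

import random

rng = random.Random(0)
spins = [rng.choice("RB") for _ in range(200_000)]

after_three_reds = [
    spins[i + 3]
    for i in range(len(spins) - 3)
    if spins[i:i + 3] == ["R", "R", "R"]
]

p_black_overall = spins.count("B") / len(spins)
p_black_after_reds = after_three_reds.count("B") / len(after_three_reds)
# Both proportions land near 0.5: the streak carries no information.
```

This kind of small experiment can itself be a useful knowledge-crunching tool: it turns a disputed intuition into something the whole group can inspect.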



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - December 19, 2021

Data Science Collides with Traditional Math in the Golden State

San Francisco’s approach is the model for a new math framework proposed by the California Department of Education that has been adopted for K-12 education statewide. Like the San Francisco model, the state framework seeks to alter the traditional pathway that has guided college-bound students for generations, including by encouraging middle schools to drop Algebra (the decision to implement the recommendations is made by individual school districts). This new framework has been received with some controversy. Yesterday, a group of university professors wrote an open letter on K-12 mathematics, which specifically cites the new California Mathematics Framework. “We fully agree that mathematics education ‘should not be a gatekeeper but a launchpad,’” the professors write. “However, we are deeply concerned about the unintended consequences of recent well-intentioned approaches to reform mathematics, particularly the California Mathematics Framework.” Frameworks like the CMF aim to “reduce achievement gaps by limiting the availability of advanced mathematical courses to middle schoolers and beginning high schoolers,” the professors continued.


Promoting trust in data through multistakeholder data governance

A lack of transparency and openness of the proceedings, or barriers to participation, such as prohibitive membership fees, will impede participation and reduce trust in the process. These challenges are particularly felt by participants from low- and middle-income countries (LICs and LMICs), whose financial resources and technical capacity are usually not on par with those of higher-income countries. These challenges affect both the participatory nature of the process itself and the inclusiveness and quality of the outcome. Even where a level playing field exists, the effectiveness of the process can be limited if decision makers do not incorporate input from other stakeholders. Notwithstanding the challenges, multistakeholder data governance is an essential component of the “trust framework” that strengthens the social contract for data. In practice, this will require supporting the development of diverse forums—formal or informal, digital or analog—to foster engagement on key data governance policies, rules, and standards, and the allocation of funds and technical assistance by governments and nongovernmental actors to support the effective participation of LMICs and underrepresented groups.


A Plan for Developing a Working Data Strategy Scorecard

Strategy is an evolving process, with regular adjustments expected as progress is measured against desired goals over longer timeframes. “There’s always an element of uncertainty about the future,” Levy said, “so strategy is more about a set of options or strategic choices, rather than a fixed plan.” It’s common for companies to re-evaluate and adjust accordingly as business goals evolve and systems or tools change. Before building a strategy, people often assume that they must have vision statements or mission statements, a SWOT analysis, or goals and objectives. These are good to have, he said, but in most instances, they are only available after the strategy analysis is completed. “When people establish their Data Strategies, it’s typically to address limitations they have and the goals that they want. Your strategy, once established, should be able to answer these questions.” But again, Levy said, it’s after the strategy is developed, not prior. Although it can be difficult to understand the purpose of a Data Strategy, he said, it’s critically important to clearly identify goals and know how to communicate them to the intended audience.


“Less popular” JavaScript Design Patterns

As software engineers, we strive to write maintainable, reusable, and eloquent code that might live forever in large applications. The code we create must solve real problems. We are certainly not trying to create redundant, unnecessary, or “just for fun” code. At the same time, we frequently face problems that already have well-known solutions that have been defined and discussed by the Global community or even by our own teams millions of times. Those solutions to such problems are called “Design patterns”. There are a number of existing design patterns in software design, some of them are used more often, some of them less frequently. Examples of popular JavaScript design patterns include factory, singleton, strategy, decorator, and observer patterns. In this article, we’re not going to cover all of the design patterns in JavaScript. Instead, let’s consider some of the less well-known but potentially useful JS patterns such as command, builder, and special case, as well as real examples from our production experience.
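Of the "less popular" patterns the article names, the command pattern is the quickest to sketch: each operation is wrapped in an object that can be executed later, queued, or undone. The article's examples are in JavaScript; the structure is language-agnostic and is shown here in Python for brevity, with an invented shopping-cart example.

```python
# Command pattern sketch: operations become objects with execute/undo,
# so they can be queued, logged, and rolled back uniformly.

class AddItem:
    def __init__(self, cart, item):
        self.cart, self.item = cart, item

    def execute(self):
        self.cart.append(self.item)

    def undo(self):
        self.cart.remove(self.item)

cart, history = [], []
for item in ("book", "pen"):
    cmd = AddItem(cart, item)
    cmd.execute()
    history.append(cmd)   # keep executed commands so they can be undone

history.pop().undo()      # roll back the most recent command
# cart is now ["book"]
```

The payoff is that the invoker (the loop and the history stack) knows nothing about what each command does, which is what makes features like undo stacks and job queues reusable.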


Software Engineering | Coupling and Cohesion

The purpose of the Design phase in the Software Development Life Cycle is to produce a solution to a problem given in the SRS (Software Requirement Specification) document. The output of the design phase is the Software Design Document (SDD). Basically, design is a two-part iterative process. The first part is Conceptual Design, which tells the customer what the system will do. The second is Technical Design, which allows the system builders to understand the actual hardware and software needed to solve the customer’s problem. ... If the dependency between the modules is based on the fact that they communicate by passing only data, then the modules are said to be data coupled. In data coupling, the components are independent of each other and communicate through data. Module communications don’t contain tramp data. Example: a customer billing system. In stamp coupling, the complete data structure is passed from one module to another module. Therefore, it involves tramp data. It may be necessary due to efficiency factors: this choice was made by the insightful designer, not a lazy programmer.
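The data-coupling versus stamp-coupling distinction can be shown with the billing example the text mentions. The field names and customer record below are invented for illustration.

```python
# Data coupling vs stamp coupling, using the text's billing example.

def bill_data_coupled(units, rate):
    # Data coupling: receives only the two values it actually needs.
    return units * rate

def bill_stamp_coupled(customer):
    # Stamp coupling: receives the whole record, including "tramp data"
    # (name, address) that it never uses, so it depends on the structure.
    return customer["units"] * customer["rate"]

customer = {"name": "Asha", "address": "12 Main St", "units": 120, "rate": 4}
bill_data_coupled(customer["units"], customer["rate"])   # 480
bill_stamp_coupled(customer)                             # 480
```

Both compute the same bill, but only the stamp-coupled version must change if the customer record's layout changes, which is why data coupling is considered the looser, preferable form.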


5 Takeaways from SmartBear’s State of Software Quality Report

As API adoption and growth continues, standardization (52%) continues to rank as the top challenge organizations hope to solve soon as they look to scale. Without standardization, APIs become bespoke and developer productivity declines. Costs and time-to-market increase to accommodate changes, the general quality of the consumer experience wanes, and it leads to a lower value proposition and decreased reach. Additionally, the consumer persona in the API landscape is rightfully getting more attention. Consumer expectations have never been higher. API consumers demand standardized offerings from providers and will look elsewhere if expectations around developer experience aren’t met, which is especially true in financial services. Security (40%) has thankfully crept up in the rankings to number two this year. APIs increasingly connect our most sensitive data, so ensuring your APIs are secure before, during, and after production is imperative. Applying thoughtful standardization and governance guardrails is required for teams to deliver good-quality, secure APIs consistently.


From DeFi year to decade: Is mass adoption here? Experts Answer, Part 1

More scaling solutions will become essential to the mass adoption of DeFi products and services. We are seeing that most DeFi applications go live on multiple chains. While that makes them cheaper to use, it adds more complexities for those who are trying to learn and understand how they work. Thus, to start the second phase of DeFi mass adoption, we need solutions that simplify onboarding and use DApps that are spread across different chains and scaling solutions. The endgame is that all the cross-chain actions will be in the background, handled by infra services such as Biconomy or the DApp themselves, so the user doesn’t need to deal with it themselves. ... Going into 2022 and equipped with the right layer-one networks, we’re aiming for mass adoption. To achieve that, we need to eradicate the entry barriers for buying and selling crypto through regulated fiat bridges (such as banks), overhaul the user experience, reduce fees, and provide the right guide rails so everyone can easily and safely participate in the decentralized economy. DeFi is legitimizing crypto and decentralized economies. Traditional financial institutions are already starting to participate. In 2022, we will only see an uptick in usage and adoption.


Serious Security: OpenSSL fixes “error conflation” bugs – how mixing up mistakes can lead to trouble

The good news is that the OpenSSL 1.1.1m release notes don’t list any CVE-numbered bugs, suggesting that although this update is both desirable and important, you probably don’t need to consider it critical just yet. But those of you who have already moved forwards to OpenSSL 3 – and, like your tax return, it’s ultimately inevitable, and somehow a lot easier if you start sooner – should note that OpenSSL 3.0.1 patches a security risk dubbed CVE-2021-4044. ... In theory, a precisely written application ought not to be dangerously vulnerable to this bug, which is caused by what we referred to in the headline as error conflation, which is really just a fancy way of saying, “We gave you the wrong result.” Simply put, some internal errors in OpenSSL – a genuine but unlikely error, for example, such as running out of memory, or a flaw elsewhere in OpenSSL that provokes an error where there wasn’t one – don’t get reported correctly. Instead of percolating back to your application precisely, these errors get “remapped” as they are passed back up the call chain in OpenSSL, where they ultimately show up as a completely different sort of error.
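"Error conflation" is easy to demonstrate in miniature: two distinct internal failures get remapped to the same generic error as they travel up the call chain, so the caller can no longer tell a transient fault from a genuine verification failure. The layers and error names below are invented for illustration and are not OpenSSL's internals.

```python
# Toy error conflation: a middle layer remaps every underlying cause to
# one generic error, destroying information the caller needed.

class GenericError(Exception):
    pass

def low_level(kind):
    if kind == "oom":
        raise MemoryError("out of memory")          # transient, retryable
    if kind == "bad_sig":
        raise ValueError("signature check failed")  # security-relevant

def mid_layer(kind):
    try:
        low_level(kind)
    except Exception:
        # The conflating remap: every cause becomes the same error.
        raise GenericError("internal error")

for kind in ("oom", "bad_sig"):
    try:
        mid_layer(kind)
    except GenericError as e:
        print(kind, "->", e)   # both causes look identical to the caller
```

An application that retries on "internal error" might retry past a failed signature check, which is the kind of misbehavior that makes this class of bug a security concern rather than a cosmetic one.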


Digital Asset Management – what is it, and why does my organisation need it?

DAM technology is more than a repository, of course. Picture it as a framework that holds a company’s assets, on top of which sits a powerful AI engine capable of learning the connections between disparate data sets and presenting them to users in ways that make the data more useful and functional. Advanced DAM platforms can scale up to storing more than ten billion objects – all of which become tangible assets, connected by the in-built AI -- at the same time. This has the capacity to result in a huge rise in efficiency around the use of assets and objects. Take, for example, a busy modern media marketing agency. In the digital world, they are faced with a massive expansion of content at the same time as release windows are shrinking – coupled with the issue of increasingly complex content creation and delivery ecosystems. A DAM platform can manage those huge volumes of assets - each with their complex metadata - at speeds and scale that would simply break a legacy system. Another compelling example of DAM in action includes a large U.S.-based film and TV company, which uses it for licencing management.


Impact of Data Quality on Big Data Management

A starting point for measuring Data Quality can be the qualities of big data—volume, velocity, variety, veracity—supplemented with a fifth criterion, value; together these make up the baseline performance benchmarks. Interestingly, these baseline benchmarks actually contribute to the complexity of big data: variety such as structured, unstructured, or semi-structured increases the possibility of poor data; data channels such as streaming devices with high-volume and high-velocity data enhance the chances of corrupt data—and thus no single quality metric can work on such voluminous and multi-type data. The easy availability of data today is both a boon and a barrier to Enterprise Data Management. On one hand, big data promises advanced analytics with actionable outcomes; on the other hand, data integrity and security are seriously threatened. The Data Quality program is an important step in implementing a practical DG framework as this single factor controls the outcomes of business analytics and decision-making. ... Another primary challenge that big data brings to Data Quality Management is ensuring data accuracy, without which insights would be inaccurate.



Quote for the day:

"There is no 'one' way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer

Daily Tech Digest - April 30, 2021

Tech to the aid of justice delivery

Obsolete statutes which trigger unnecessary litigation need to be eliminated, as is being done currently, with over 1,500 statutes removed in the last few years. Furthermore, for any new legislation, a sunset review clause should be made a mandatory intervention, such that after every few years it is reviewed for its relevance in society. A corollary to this is scaling up the decriminalisation of minor offences after determining, as Kadish SH argued in his seminal paper ‘The Crisis of Overcriminalization’, whether the total public and private costs of criminalisation outweigh the benefits. Non-compliance with certain legal provisions that doesn’t involve mala fide intent can be addressed through monetary compensation rather than prison time, which inevitably instigates litigation. Finally, among the plethora of ongoing litigations in the Indian court system, a substantial number are those that don’t require interpretation of the law by a judge, but simply adjudication on facts. These can take the route of ODR, which has the potential for dispute avoidance by promoting legal education and inducing informed choices for initiating litigation, and also containment by making use of mediation, conciliation or arbitration, and resolving disputes outside the court system.


Leading future-ready organizations

To break through these barriers to Agile, companies need a restart.

They need to continue to expand on the initial progress they’ve made but focus on implementing a wider, more holistic approach to Agile. Every aspect of the organization must be engaged in an ongoing cyclical process of “discover and evaluate, prioritize, build and operate, analyze…and repeat.” ... Organizations that leverage digital decoupling are able to get on independent release cycles and unlock new ways of working with legacy systems. Based on our work with clients, we’ve seen that this can result in up to 30% reduction in cost of change, reduced coordination overhead, and increased speed of planning and pace of delivery. ... In our work with clients, we see firsthand how cross-functional teams and automation of application delivery and operations contribute to increased pace of delivery, improved employee productivity, and up to 30% reduction in deployment time. Additionally, scaling DevOps enables fast and reliable releases of new features to production within short iterations and includes optimizing processes and upskilling people, which is the starting point for a collaborative and liquid enterprise. ... Moving talent and partners into a non-hierarchical and blended talent sourcing and management model can result in a 10-20% increase in capacity.


F5 Big-IP Vulnerable to Security-Bypass Bug

The vulnerability specifically exists in one of the core software components of the appliance: the Access Policy Manager (APM). It manages and enforces access policies, i.e., making sure all users are authenticated and authorized to use a given application. Silverfort researchers noted that APM is sometimes used to protect access to the Big-IP admin console too. APM implements Kerberos as an authentication protocol for authentication required by an APM policy, they explained. “When a user accesses an application through Big-IP, they may be presented with a captive portal and required to enter a username and password,” researchers said, in a blog posting issued on Thursday. “The username and password are verified against Active Directory with the Kerberos protocol to ensure the user is who they claim they are.” During this process, the user essentially authenticates to the server, which in turn authenticates to the client. To work properly, the KDC (Key Distribution Center) must also authenticate to the server. The KDC is a network service that supplies session tickets and temporary session keys to users and computers within an Active Directory domain.


4 Business Benefits of an Event-Driven Architecture (EDA)

Using an event-driven architecture can significantly improve developmental efficiency in terms of both speed and cost. This is because all events are passed through a central event bus, which new services can easily connect with. Not only can services listen for specific events, triggering new code where appropriate, but they can also push events of their own to the event bus, indirectly connecting to existing services. ... If you want to increase the retention and lifetime value of customers, improving your application’s user experience is a must. An event-driven architecture can be incredibly beneficial to user experience (albeit indirectly) since it encourages you to think about and build around… events! ... Using an event-driven architecture can also reduce the running costs of your application. Since events are pushed to services as they happen, there’s no need for services to poll each other for state changes continuously. This leads to significantly fewer calls being made, which reduces bandwidth consumption and CPU usage, ultimately translating to lower operating costs. Additionally, those using a third-party API gateway or proxy will pay less if they are billed per call.
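The publish/subscribe pattern behind this can be sketched in a few lines: a central bus that services subscribe to and publish on, so a new service attaches without the publisher changing at all. This is a minimal in-process illustration; the event names and payload shape are hypothetical, and a real deployment would use a broker such as Kafka or an equivalent managed event bus.

```python
from collections import defaultdict

class EventBus:
    """Minimal synchronous event bus: handlers keyed by event type."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every handler registered for this type.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
audit_log = []

# An existing service reacts to order events...
bus.subscribe("order.placed",
              lambda e: audit_log.append(f"audit: {e['order_id']}"))
# ...and a new service hooks in by publishing its own event,
# without the original publisher knowing it exists.
bus.subscribe("order.placed",
              lambda e: bus.publish("email.requested", e))
bus.subscribe("email.requested",
              lambda e: audit_log.append(f"email: {e['order_id']}"))

bus.publish("order.placed", {"order_id": 42})
print(audit_log)
```

Note how the email service was added purely by subscribing; this is the decoupling that makes the "fewer calls, no polling" cost argument above possible.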


Gartner says low-code, RPA, and AI driving growth in ‘hyperautomation’

Gartner said process-agnostic tools such as RPA, LCAP, and AI will drive the hyperautomation trend because organizations can use them across multiple use cases. Even though they constitute a small part of the overall market, their impact will be significant, with Gartner projecting 54% growth in these process-agnostic tools. Through 2024, the drive toward hyperautomation will lead organizations to adopt at least three out of the 20 process-agnostic types of software that enable hyperautomation, Gartner said. The demand for low-code tools is already high as skills-strapped IT organizations look for ways to move simple development projects over to business users. Last year, Gartner forecast that three-quarters of large enterprises would use at least four low-code development tools by 2024 and that low-code would make up more than 65% of application development activity. Software automating specific tasks, such as enterprise resource planning (ERP), supply chain management, and customer relationship management (CRM), will also contribute to the market’s growth, Gartner said.


When cryptography attacks – how TLS helps malware hide in plain sight

Lots of things that we rely on, and that are generally regarded as bringing value, convenience and benefit to our lives…can be used for harm as well as good. Even the proverbial double-edged sword, which theoretically gave ancient warriors twice as much fighting power by having twice as much attack surface, turned out to be, well, a double-edged sword. With no “safe edge” at the rear, a double-edged sword that was mishandled, or driven back by an assailant’s counter-attack, became a direct threat to the person wielding it instead of to their opponent. ... The crooks have fallen in love with TLS as well. By using TLS to conceal their malware machinations inside an encrypted layer, cybercriminals can make it harder for us to figure out what they’re up to. That’s because one stream of encrypted data looks much the same as any other. Given a file that contains properly encrypted data, you have no way of telling whether the original input was the complete text of the Holy Bible, or the compiled code of the world’s most dangerous ransomware. After they’re encrypted, you simply can’t tell them apart – indeed, a well-designed encryption algorithm should convert any input plaintext into an output ciphertext that is indistinguishable from the sort of data you get by repeatedly rolling a die.


Decoupling Software-Hardware Dependency In Deep Learning

Working with distributed data processing systems such as Apache Spark, Distributed TensorFlow or TensorFlowOnSpark adds complexity. The costs of the associated hardware and software go up too. Traditional software engineering typically assumes that hardware is at best a non-issue and at worst a static entity. In the context of machine learning, hardware performance directly translates to reduced training time. So, there is a great incentive for the software to follow the hardware development in lockstep. Deep learning often scales directly with model size and data amount. As training times can be very long, there is a powerful motivation to maximise performance using the latest software and hardware. Changing the hardware and software may cause issues in maintaining reproducible results and run up significant engineering costs while keeping software and hardware up to date. Building production-ready systems with deep learning components poses many challenges, especially if the company does not have a large research group and a highly developed supporting infrastructure. However, recently, a new breed of startups has surfaced to address the software-hardware disconnect.


4 tips for launching a successful data strategy

Your business partners know that data can be powerful, and they know that they want it, but they do not always know, specifically, what data they need and how to use it. The IT organization knows how to collect, structure, secure, and serve up the data, but they are not typically responsible for defining how best to leverage the data. This gap between serving up the data and using the data can be as wide as the Ancient Mariner’s ocean (sorry), over which the CIO needs to build a bridge. ... But how do we attract those brilliant data scientists who can build the data dashboard straw man? To counter the challenge of a really tight market for these rare birds, Nick Daffan, CIO of Verisk Analytics, suggests giving data scientists what we all want: interesting work that creates an impact. “Data scientists want to get their hands on data that has both depth and breadth, and they want to work with the most advanced tools and methods," Daffan says. "They also want to see their models implemented, which means being able to help their business partners and customers use the data in a productive way.”


How to boost internal cyber security training

A big part of maintaining engagement among staff when it comes to cyber security is explaining how the consequences of insufficient protection could affect employees in particular. “Unless individuals feel personally invested, they tend not to concern themselves with the impact of a breach,” said James Spiteri, principal security specialist at Elastic. “Provide training that moves beyond theory and shows the risks and implications through actual practice to help engage the individual. For example, simulating an attack to show how an insecure password or bad security hygiene on personal accounts can lead to unwanted access to people’s personal information such as photos or payment details could be very effective in changing behaviours. “Teams need to find relatable tools to help break down the complexities of cyber security. Showcasing cyber security problems through relatable items like phones, and everyday situations such as connecting to public Wi-Fi, can help spread awareness of employees’ digital footprint and how easy it is to spread information without being aware of it.”


Shedding light on the threat posed by shadow admins

Threat actors seek shadow admin accounts because of their privilege and the stealthiness they can bestow upon attackers. These accounts are not part of a group of privileged users, meaning their activities can go unnoticed. If an account is part of an Active Directory (AD) group, AD admins can monitor it, and unusual behaviour is therefore relatively straightforward to pinpoint. However, shadow admins are not members of a group, since they gain a particular privilege by a direct assignment. If a threat actor seizes control of one of these accounts, they immediately have a degree of privileged access. This access allows them to advance their attack subtly and craftily seek further privileges and permissions while escaping defender scrutiny. Leaving shadow admin accounts on an organization’s AD is a considerable risk that’s best compared to handing over the keys to one’s kingdom for a particular task and then forgetting to track who has the keys and when to ask for them back. It pays to know who exactly has privileged access, which is where AD admin groups help. Conversely, the presence of shadow admin accounts could be a sign that an attack is underway.
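The detection logic implied here, flagging accounts that hold sensitive rights through direct assignment rather than group membership, can be sketched simply. The data structures below are illustrative stand-ins, not a real Active Directory API; in practice this data would come from enumerating AD group membership and object ACLs.

```python
# Hypothetical snapshot of privileged group membership.
privileged_groups = {"Domain Admins": {"alice"}}

# Hypothetical direct permission grants on AD objects
# (the shadow-admin vector: rights assigned outside any group).
acl_grants = [
    {"account": "svc-backup", "right": "ResetPassword", "target": "Domain Admins"},
    {"account": "bob", "right": "ReadProperty", "target": "Printers"},
]

# Rights commonly treated as sensitive in AD ACL reviews.
SENSITIVE_RIGHTS = {"ResetPassword", "WriteDACL", "WriteOwner", "AllExtendedRights"}

# Anyone in a privileged group is a known, monitorable admin.
group_admins = set().union(*privileged_groups.values())

# Shadow admins: sensitive rights held directly, outside any admin group.
shadow_admins = {
    grant["account"]
    for grant in acl_grants
    if grant["right"] in SENSITIVE_RIGHTS and grant["account"] not in group_admins
}
print(shadow_admins)
```

Here `svc-backup` would be flagged: it can reset Domain Admin passwords yet appears in no admin group, so routine group-based monitoring would never see it.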



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kanter