
Daily Tech Digest - December 21, 2025


Quote for the day:

"Don't worry about being successful but work toward being significant and the success will naturally follow." -- Oprah Winfrey



Is it Possible to Fight AI and Win?

What’s the most important thing security teams need to figure out? Organizations must stop talking about AI like it’s some kind of Death Star. AI is not a single, all-powerful, monolithic entity. It’s a stack of threats, behaviors, and operational surfaces, and each one has its own kill chain, controls, and business consequences. We need to break AI down into its parts and conduct a real campaign to defend ourselves. ... If AI is going to be operationalized inside your business, it should be treated like a business function. Not a feature or experiment, but a real operating capability. When you look at it that way, the approach becomes clearer, because businesses already know how to do this. There is always an equivalent of HR, finance, engineering, marketing, and operations. AI has the same needs. ... Quick fixes aren’t enough in the AI era. Bad actors are innovating at machine speed, so humans must respond at machine speed, with appropriate human direction and ethical clarity. AI is a tool, and the side that uses it better will win. If that isn’t enough, AI will force another reality that organizations need to prepare for: security and compliance will become an on-demand model. Customers will not wait for annual reports or scheduled reviews. They will click into a dashboard and see your posture in real time. Your controls, your gaps, and your response discipline will be visible when it matters, not when it is convenient.


Cybersecurity Budgets are Going Up, Pointing to a Boom

Nearly all of the security leaders (99%) in the 2025 KPMG Cybersecurity Survey plan to increase their cybersecurity budgets over the coming two to three years, in preparation for what may be an upcoming boom in cybersecurity. More than half (54%) say budget increases will fall between 6% and 10%. “The data doesn’t just point to steady growth; it signals a potential boom. We’re seeing a major market pivot where cybersecurity is now a fundamental driver of business strategy,” Michael Isensee, Cybersecurity & Tech Risk Leader, KPMG LLP, said in a release. “Leaders are moving beyond reactive defense and are actively investing to build a security posture that can withstand future shocks, especially from AI and other emerging technologies. This isn’t just about spending more; it’s about strategic investment in resilience.” ... The security leaders recognize that AI is gathering steam as a dual catalyst: 38% expect to be challenged by AI-powered attacks over the coming three years, and 70% of organizations currently commit 10% of their budgets to combating such attacks. But they also say AI is their best weapon to proactively identify and stop threats, citing fraud prevention (57%), predictive analytics (56%) and enhanced detection (53%). But they need the talent to pull it off, and as the boom takes off, 53% simply don’t have enough qualified candidates. As a result, 49% are increasing compensation and the same number are bolstering internal training, while 25% are increasingly turning to third parties such as MSSPs to fill the skills gap.



How Neuro-Symbolic AI Breaks the Limits of LLMs

While AI transforms subjective work like content creation and data summarization, executives rightfully hesitate to use it when facing objective, high-stakes determinations that have clear right and wrong answers, such as contract interpretation, regulatory compliance, or logical workflow validation. But what if AI could demonstrate its reasoning and provide mathematical proof of its conclusions? That’s where neuro-symbolic AI offers a way forward. The “neuro” refers to neural networks, the technology behind today’s LLMs, which learn patterns from massive datasets. A practical example could be a compliance system, where a neural model trained on thousands of past cases might infer that a certain policy doesn’t apply in a scenario. On the other hand, symbolic AI represents knowledge through rules, constraints, and structure, and it applies logic to make deductions. ... Neuro-symbolic AI introduces a structural advance in LLM training by embedding automated reasoning directly into the training loop. This uses formal logic and mathematical proof to mechanically verify whether a statement, program, or output used in the training data is correct. A tool such as Lean 4 is precise, deterministic, and gives provable assurance. The key advantage of automated reasoning is that it verifies each step of the reasoning process, and not just the final answer.
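As a toy illustration of the kind of mechanical checking described above (this snippet is illustrative and not drawn from the article), a Lean 4 theorem is only accepted when the kernel can verify every step of its justification; there is no "mostly right":

```lean
-- Lean 4: the kernel checks this proof term against the statement.
-- If the justification were wrong at any step, the file would not compile.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is what "precise, deterministic, provable assurance" means in practice: the verifier either certifies the whole derivation or rejects it outright.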


Three things they’re not telling you about mobile app security

With the realities of “wilderness survival” in mind, effective mobile app security must be designed for specific environmental exposures. You may need to wear some kind of jacket at your office job (web app), but you’ll need a very different kind of purpose-built jacket as well as other clothing layers, tools, and safety checks to climb Mount Everest (mobile app). Similarly, mobile app development teams need to rigorously test their code for potential security issues and also incorporate multi-layered protections designed for some harsh realities. ... A proactive and comprehensive approach is one that applies mobile application security at each stage of the software development lifecycle (SDLC). It includes the aforementioned testing in the stages of planning, design, and development as well as those multi-layered protections to ensure application integrity post-release. ... Whether stemming from overconfidence or just kicking the can down the road, inadequate mobile app security presents an existential risk. A recent survey of developers and security professionals found that organizations experienced an average of nine mobile app security incidents over the previous year. The total calculated cost of each incident isn’t just about downtime and raw dollars, but also “little things” like user experience, customer retention, and your reputation.


Cybersecurity in 2026: Fewer dashboards, sharper decisions, real accountability

The way organisations perceive risk is one of the most important changes predicted in 2026. Security teams spent years concentrating on inventory, which included tracking vulnerabilities, chasing scores and counting assets. That model is beginning to disintegrate. Attack-path modelling, on the other hand, is becoming far more useful and practical. These models are evolving from static diagrams to real-world settings where teams may simulate real attacks. Consider it a cyberwar simulation where defenders may test “what if” scenarios in real time, comprehend how a threat might propagate via systems and determine whether vulnerabilities truly cause harm to organisations. This evolution is accompanied by a growing disenchantment with abstract frameworks that failed to provide concrete outcomes. The emphasis is shifting to risk-prioritized operations, where teams tackle the few problems that actually give attackers access instead of reacting to clutter. Success in 2026 will be determined more by impact than by activity. ... Many companies continue to handle security issues behind closed doors as PR disasters. However, an alternative strategy is gaining momentum. Communicate as soon as something goes wrong. Update frequently, share your knowledge and acknowledge your shortcomings. Post signs of compromise. Allow partners and clients to defend themselves. Particularly in the middle of disorder, this seems dangerous.


AI and Latency: Why Milliseconds Decide Winners and Losers in the Data Center Race

Many traditional workloads can tolerate latency. Batch processing doesn’t care if it takes an extra second to move data. AI training, especially at hyperscale, can also be forgiving. You can load up terabytes of data in a data center in Idaho and process it for days without caring if it’s a few milliseconds slower. Inference is a different beast. Inference is where AI turns trained models into real-time answers. It’s what happens when ChatGPT finishes your sentence, your banking AI flags a fraudulent transaction, or a predictive maintenance system decides whether to shut down a turbine. ... If you think latency is just a technical metric, you’re missing the bigger picture. In AI-powered industries, shaving milliseconds off inference times directly impacts conversion rates, customer retention, and operational safety. A stock trading platform with 10 ms faster AI-driven trade execution has a measurable financial advantage. A translation service that responds instantly feels more natural and wins user loyalty. A factory that catches a machine fault 200 ms earlier can prevent costly downtime. Latency isn’t a checkbox, it’s a competitive differentiator. And customers are willing to pay for it. That’s why AWS and others have “latency-optimized” SKUs. That’s why every major hyperscaler is pushing inference nodes closer to urban centers.


Why developers need to sharpen their focus on documentation

“One of the bigger benefits of architectural documentation is how it functions as an onboarding resource for developers,” Kalinowski told ITPro. “It’s much easier for new joiners to grasp the system’s architecture and design principles, which means the burden’s not entirely on senior team members’ shoulders to do the training," he added. “It also acts as a repository of institutional knowledge that preserves decision rationale, which might otherwise get lost when team members move to other projects or leave the company." ... “Every day, developers lose time because of inefficiencies in their organization – they get bogged down in repetitive tasks and waste time navigating between different tools,” he said. “They also end up losing time trying to locate pertinent information – like that one piece of documentation that explains an architectural decision from a previous team member,” Peters added. “If software development were an F1 race, these inefficiencies are the pit stops that eat into lap time. Every unnecessary context switch or repetitive task equals more time lost when trying to reach the finish line.” ... “Documentation and deployments appear to either be not routine enough to warrant AI assistance or otherwise removed from existing workflows so that not much time is spent on it,” the company said. ... For developers of all experience levels, Stack Overflow highlighted a concerning divide in terms of documentation activities.


AI Pilots Are Easy. Business Use Cases Are Hard

Moving from pilot to purpose is where most AI journeys lose momentum. The gap often lies not in the model itself, but in the ecosystem around it. Fragmented data, unclear ROI frameworks and organizational silos slow down scaling. To avoid this breakdown, an AI pilot must be anchored to clear business outcomes - whether that's cost optimization, data-led infrastructure or customer experience. Once the outcomes are defined, the organization can test the system with the specific data and processes that will support it. This focus sets the stage for the next 10 to 14 months of refinement needed to ready the tool for deeper integration. When implementation begins, workflows become self-optimizing, decisions accelerate and frontline teams gain real-time intelligence. As AI moves beyond pilots, systems begin spotting patterns before people do. Teams shift from retrospective analysis to live decision-making. Processes improve themselves through constant feedback loops. These capabilities unlock efficiency and insight across businesses, but highly regulated industries such as banking, insurance, and healthcare face additional hurdles. Compliance, data privacy and explainability add layers of complexity, making it essential for AI integration to include process redesign, staff retraining and organizationwide AI literacy, not just within technical teams.


Why your next cloud bill could be a trap

“AI-ready” often means “AI deeply embedded” into your data, tools, and runtime environment. Your logs are now processed through their AI analytics. Your application telemetry routes through their AI-based observability. Your customer data is indexed for their vector search. This is convenient in the short term. In the long term, it shifts power. The more AI-native services you consume from a single hyperscaler, the more they shape your architecture and your economics. You become less likely to adopt open source models, alternative GPU clouds, or sovereign and private clouds that might be a better fit for specific workloads. You are more likely to accept rate changes, technical limits, and road maps that may not align with your interests, simply because unwinding that dependency is too painful. ... For companies not prepared to fully commit to AI-native services from a single hyperscaler, or in search of a backup option, these alternatives matter. They can host models under your control, support open ecosystems, or serve as a landing zone for workloads you might eventually relocate from a hyperscaler. However, maintaining this flexibility requires avoiding the strong influence of deeply integrated, proprietary AI stacks from the start. ... The bottom line is simple: AI-native cloud is coming, and in many ways, it’s already here. The question is not whether you will use AI in the cloud, but how much control you will retain over its cost, architecture, and strategic direction.


IT and Security: Aligning to Unlock Greater Value

While many organisations have made strides in aligning IT and security, communication breakdowns can remain a challenge. Historically, friction between these two departments was driven by a lack of communication and competing priorities. For the CISO or head of the security team, reducing the company’s attack surface, limiting access privileges, or banning apps that might open their organisation up to unnecessary, additional risks are likely to be core focus areas. ... The good news is, there are more opportunities now than ever before for IT and security operations to naturally converge – in endpoint management, patch deployment, identity and access management, you name it. It can help to clearly document IT and security’s roles and responsibilities and practice scenarios with tabletop exercises to get everyone on the same page and identify coverage gaps. ... In addition to building versatile teams, organisations should focus on consolidating IT and security toolkits by prioritising solutions that expedite time to value and boost visibility. We’ve said this in security for a long time: you can’t protect (or defend against) what you can’t see. With shared visibility through integrated platforms and consolidated toolkits, both IT and security teams can gain real-time insights into infrastructure, threats, vulnerabilities, and risks before they can impact business. Solutions that help IT and security teams rapidly exchange critical information, accelerate response to incidents, and document the triaging process will make it easier to address similar instances in the future.

Daily Tech Digest - February 08, 2022

Top 10 Anticipated Web 3.0 Trends For 2022

Web 3.0 will bring peer-to-peer regulation through blockchain, combining cryptography and consensus algorithms to enable decentralization and offer an alternative to the standard databases in use today. Decentralization ensures the sole ownership of the user’s data. It means that only that user will have access to whatever data is uploaded, altered, saved and utilized. No third party is involved (the government, for example), nor can anyone dictate when and how the data is used. ... Social media is developing its platforms on the decentralized technology of Web 3.0. This would mean that centralized features will no longer (or only partly) be available on social media platforms in the near future. Blockchain ledgers will be used to construct the new social media industry. Web 3.0 aims to solve problems such as privacy breaches, mismanaged data, and unauthentic and irrelevant information that have been part of the previous generation of the internet. It offers a safe and secure place for users to participate. Decentralization ensures protection and security for every piece of data added to the internet.


DAOs are meant to be completely autonomous and decentralized, but are they?

If DAOs are to remain true to their nature, where the community is able to make decisions equally, decentralization needs to happen in stages. However, a certain level of control is required so that common prosperity is maintained within the organization. While involved communities should be given the power to make proposals and decisions, gatekeepers or councils may be required to effectively maintain the core values of the company. Most successful DAOs, including Uniswap, MakerDAO, PieDAO, Decred and more, have different systems of gatekeeping where proposals go through various stages before being accepted. For example, Uniswap’s governance protocol has multiple stages of execution before any proposal is accepted. Its last stage is a group of elected users that have the power to halt the implementation of any proposal it deems malicious or unnecessary. On the other hand, MakerDAO has a more open community where people don’t need to hold its token to participate in off-chain voting. Yet its proposals undergo strict scrutiny.


Database Management Trends in 2022

Augmented Data Management uses machine learning and artificial intelligence to automate Data Management tasks, such as spotting anomalies within large amounts of data and resolving Data Quality issues. The AI models are specifically designed to perform Data Management tasks, taking less time and making fewer errors. Todd Ramlin, a manager of Cable Compare, in describing the benefits of augmented Data Management, said, “Historically, data scientists and engineers have spent the majority of their time manually accessing, preparing, and managing data, but Augmented Data Management is changing that. ADM uses artificial intelligence and machine learning to automate manual tasks in Data Management. It simplifies, optimizes, and automates operations in Data Quality, Metadata Management, Master Data Management, and Database Management systems. AI/ML can offer smart recommendations based on pre-learned models of solutions to specific data tasks. The automation of manual tasks will lead to increased productivity and better data outcomes.”


How open source is shaping data storage management

While open source data storage software is cost-effective, there is a big difference between downloading a project for free and trying it out on a developer machine versus using it to power mission-critical applications that have stringent requirements such as stability, high availability and security. Ghariwala notes that enterprises will need strong technical resources to architect a solution that supports their mission-critical application requirements, as well as dedicated resources to triage production issues. ... The second challenge that enterprises may face relates to flexibility, which is not guaranteed when using open source technologies. Ghariwala says the problem generally arises when vendors only support their own technologies with their commercial open source solutions, creating lock-in and limiting an organisation’s ability to choose the right solution for their needs. Danny Elmarji, vice-president for presales at Dell Technologies in Asia-Pacific and Japan, notes that some Dell customers are starting to define and use their own software storage that runs on Dell’s hardware and compute, leveraging open-source contributions.


What Is Object Storage?

The database retains a unique identifier for each object. The 64-bit Object ID (OID) indicates the location of the object on a single storage medium or among a cluster of storage devices. Unlike block storage, which allocates storage in predefined blocks of equal length, the lengths of objects can vary. As noted, the relatively simple system of keeping track of objects makes it possible to extend a single object storage system across multiple storage resources. A file storage system, on the other hand, has a defined limit on the number of files it can manage. While some NAS file systems may be quite large, they generally can’t expand to the degree that object storage can. Another distinguishing characteristic of object storage is the way it handles metadata related to each stored object. A file system -- like the Windows file directory on a PC or a shared NAS system -- includes some basic metadata related to each file it manages, such as file name, file size, date created, date modified and possibly the application it’s associated with.
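The OID-plus-metadata model described above can be sketched in a few lines. This is a toy illustration under assumptions of my own (the class and field names are hypothetical, and real object stores assign OIDs and persist data very differently); it shows only the core idea of a flat namespace keyed by a 64-bit identifier, with variable-length data and per-object metadata:

```python
import hashlib

class ObjectStore:
    """Toy flat-namespace object store: a 64-bit OID maps to
    variable-length data plus arbitrary key-value metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> int:
        # Derive a 64-bit OID from the content. Real systems may assign
        # OIDs differently (e.g. sequentially or per storage device).
        oid = int.from_bytes(hashlib.sha256(data).digest()[:8], "big")
        self._objects[oid] = {"data": data, "metadata": metadata}
        return oid

    def get(self, oid: int) -> bytes:
        return self._objects[oid]["data"]

    def metadata(self, oid: int) -> dict:
        return self._objects[oid]["metadata"]

store = ObjectStore()
oid = store.put(b"scan-0001 raw pixels", patient_id="A17", modality="MRI")
```

Because the namespace is just OID-to-object, nothing in the lookup path depends on directory depth or file counts, which is why such systems can spread across many storage devices more easily than a hierarchical file system.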


What Architects Need to Know

Dealing with Business Concepts – while this one should be a no-brainer, it is met with open scorn in many places, where business skills are reserved only for the highest-level architects. These concepts include Business Models, Customer Journeys with Personas, Capabilities with Objectives, Value Methods, and Investment Planning with some Roadmapping. ... Technology Design and Delivery – this is a deep and interesting dialog in industry: how much business AND how much technology? If a product owner wants to become an architect, what technology should they learn? How deep do they go? At a minimum: Design including Patterns, the primary Requirements/Decisions/Quality Attributes relationships, Architecture Analysis, Deliverables, Products/Projects, Services, and Quality Assurance. ... Dealing with Stakeholders – often overlooked, always under-trained, and never given enough time or techniques, dealing with stakeholders is the hardest part of the job. Humans are mercurial, the lines of decision traceability and influence are blurred, and it is effectively chaos in the lifecycle management of companies, with lots of petty power plays and even more in terms of financing and final outcomes.


BigQuery vs Snowflake: The Definitive Guide

Snowflake offers an auto-scaling and auto-suspend feature that enables clusters to stop or start during busy or idle periods. With Snowflake, your users cannot resize nodes, but they can resize clusters in a single click. Additionally, Snowflake enables you to auto-scale up to 10 warehouses, with a limit of 20 DML statements per queue on a single table. On a similar note, BigQuery automatically provisions additional compute resources as needed and takes care of everything behind the scenes. ... Both platforms let you scale up and down automatically based on demand. Additionally, Snowflake gives you the ability to isolate workloads across businesses in different warehouses so that different teams can operate independently with no concurrency issues. ... Snowflake automatically provides encryption for data at rest. However, it does not provide granular permissions for columns, though it does provide permissions for schemas, tables, views, procedures, and other objects. Conversely, BigQuery provides column-level security as well as permissions on datasets, individual tables, views, and table access controls.


4 metaverse tools that tackle workplace collaboration

By now, most of us have come to realize that the next normal won’t look much like it used to. The pandemic has taught us that turbulent and unpredictable times require flexibility and an open mind. Meanwhile, technology companies have been delivering highly competitive technologies to win both mind and market share. ... Facebook is so committed to the metaverse that it even changed the company’s name to Meta. Meta is also looking at ways to bring the metaverse to the workplace: its Horizon Workrooms enables users to wear a virtual reality (VR) headset to feel like they’re attending an in-office meeting. Meanwhile, Microsoft is also working on bringing the metaverse to work. In 2022, Microsoft Teams users will be able to replace their video streams with 3D avatars of themselves. On the plus side, this lets people maintain a physical presence even when they’re not feeling particularly camera-ready. But at the same time, replacing ourselves with idealized avatar caricatures may further exacerbate the mental health impact of seeing our natural faces ‒ and all of our flaws ‒ filtered away.


Five blockchain use cases: from property to sustainability

“Blockchain could significantly enhance upstream, midstream and downstream operations throughout the oil and gas sector. It has the potential to make a great deal of the sector’s bureaucracy significantly more efficient, for example making it easier and quicker to confirm when third-party suppliers complete tasks so that funds can be released in a far more timely way. It can also be used to monetise reserves in a way that has not previously been possible, tokenising confirmed but not yet exploited deposits to help investors, exploration and production firms, and refining and processing operations, manage their activities and balance sheets. “The deeper we look at the potential of the blockchain in the oil and gas sector, the wider the range of opportunities from digitalising global oilfield datasets becomes. Distributed ledger technology allows for permanent transparency on a trust-protocol that integrates cloud-based servers. The approach that we are taking requires graphic processing units and high-performance computers. ...”

Communicating the Importance of Engineering Work in Business Terms

Whenever you can, bring data to the discussion. This data should be metrics related to business outcomes. Measure things like bug rates, average time it takes to deliver a feature, employee satisfaction, and customer satisfaction. A great set of metrics to pull out are the ones that come from Accelerate. You may need to show how metrics such as lead time, deploy frequency, mean time to recovery, and change failure rate directly predict improved business outcomes. But as much as possible, use metrics that speak to the problems and concerns that are top of mind for your partner. For example, let's say you are seeing that engineers are really struggling to understand a particular component - it is complex, poorly tested and poorly documented. What's the business impact of this? Likely, it takes longer to ship a feature because of a long cycle of testing and debugging, and even when it is shipped, it's probably going to have more bugs. So maybe look at the time to build a feature that touches this component versus ones that don't, and see if you can show a significant difference.
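The comparison suggested above (time to ship features that touch the problem component versus those that don't) is easy to compute once delivery data is collected. A minimal sketch with entirely hypothetical records, just to show the shape of the analysis:

```python
from statistics import mean

# Hypothetical delivery records: (feature, days_to_ship, touches_component)
features = [
    ("export-csv",      4,  False),
    ("sso-login",      21,  True),
    ("dark-mode",       5,  False),
    ("billing-rework", 30,  True),
    ("search-filters",  6,  False),
]

# Average lead time for features that touch the troublesome component
# versus those that avoid it entirely.
touching  = mean(d for _, d, hits in features if hits)
untouched = mean(d for _, d, hits in features if not hits)

print(f"avg days (touches component): {touching:.1f}")   # 25.5
print(f"avg days (avoids component):  {untouched:.1f}")  # 5.0
```

A gap like this, expressed in days of lead time rather than engineering frustration, is exactly the kind of business-legible evidence the passage recommends bringing to the discussion.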




Quote for the day:

"Good management is the art of making problems so interesting and their solutions so constructive that everyone wants to get to work and deal with them." -- Paul Hawken

Daily Tech Digest - January 21, 2022

Nuclear quantum computing: It’s coming

You can’t just upload a neural network to a quantum computer and expect it to act like it’s been supercharged. The algorithms we’re currently able to run on cutting-edge quantum systems are more like super-challenging math problems that can still be verified using classical means. Unfortunately, the long and short of it is usually: the more qubits you have, the more errors you get. The new research hopes to alleviate that by creating a new way to handle qubit operations, thus allowing gate-based quantum computer systems to scale. ... It’s likely just as safe as using lasers to create qubits out of light, maybe even safer. But the researchers are hoping it’s the foundation for a paradigm that will be much easier to scale than other systems. At the end of the day this is all exciting news. It’s rare to see a peer-reviewed quantum computing breakthrough because the field is incredibly challenging. Getting three in the same day is a eureka moment in its own right. Of course, it could take a while for these early experiments to pan out and turn into full-fledged quantum computers.


Upholding digital ethics with identity and access management

One area closely aligned with ensuring digital ethics and putting in place the right protocols to cope with our new digital processes is human resources (HR). This part of the business has had to make notable changes over the last couple of years, as it has started to rely more heavily on technology. During the pandemic, HR processes such as hiring, conflict resolution, onboarding and offboarding, and other HR-related activities could no longer follow the same face-to-face processes they had historically; workarounds were needed. HR managers had to interview via Zoom, handle conflict resolutions remotely and virtually, and so much more. Coupled with this, HR teams had a new challenge: to re-invent their processes to fit the new virtual world, while ensuring that this environment has the right digital ethics for the organisation. This is where an identity and access management (IAM) solution can help less technical individuals. In applying digital ethics, security of personnel data is paramount for organisations, and IAM solutions can make some important security requirements of remote working easier to meet. Let’s look at how an IAM solution can ensure the security, ethics and privacy of data.


Data Fabrics: Six Top Use Cases

Data fabrics are central data management frameworks that allow organizations to access their data from any endpoint within a hybrid cloud environment. “They use technologies and services to enrich the data and make it more useful for users,” explains David Proctor, senior database manager at Everconnect, which provides remote database administration and support. Data fabrics are becoming increasingly popular as organizations turn to digital storage methods. As a company grows, storage can become more complex as data is stored in different locations that are inaccessible to other parts of the organization, Proctor observes. “Data fabrics standardize … and make data accessible for everyone regardless of their location/position in the company.” In a nutshell, data fabric technology is the glue that binds all of an organization’s data systems together into a cohesive and uniform layer, says Sean Knapp, founder and CEO of Ascend.io, which offers an autonomous dataflow service. It allows data engineers to build, scale, and operate continuously optimized, Apache Spark-based pipelines with less code.


UK Issues Fresh Proposals to Tackle Cyberthreats

The government has sought to widen the scope of the law to include Managed Service Providers, which provide specialized online and digital services such as security services, workplace services and IT outsourcing. "These firms are crucial to boosting the growth of the country's 150.6-billion-pound digital sector and have privileged access to their clients' networks and systems," the report says. "While the regulations apply to some digital services such as online marketplaces, online search engines and cloud computing, there has been an increase in the use and dependence on digital services for providing corporate needs such as information storage, data processing and running software." Expanding NIS regulations to include MSPs will allow smaller businesses to attain a higher level of cyber resilience, says Tim Mackey, principal security strategist at the Synopsys Cybersecurity Research Center. The recent Log4Shell vulnerability has illustrated that cyber resilience is a function of how well software supply chains are understood, he says.


Quantum computing is coming. Now is the right time to start getting ready

Evidence suggests that message is already getting through: three-quarters (74%) of senior executives believe organisations that fail to adopt quantum computing soon will fall behind quickly, according to a recent survey by quantum company Zapata Computing and Wakefield Research. Di Meglio believes the secret to successfully understanding where your business might potentially create a quantum advantage is to focus on developments that are already being made around new instruments, tools, and methods of collaboration. He says early preparatory work will help CIOs and their businesses to identify the right skills, technologies and partners for quantum success in the longer term. As part of this process, CIOs and their executive partners must look to build collaborative teams, where all the necessary skills for quantum are brought together and then exploited in the most useful way. "Quantum computing is a very multidisciplinary area. Organisations, institutions and universities really need to work to break the silos in-between these areas," he says.


The importance of securing machine-to-machine and human-to-machine interaction

Interconnecting disparate workloads and providing each the right level of access introduces a host of new security and compliance challenges. For instance, the sheer number of secrets used in machine-to-machine and human-to-machine interactions has proliferated dramatically due to automation, containerization, DevOps initiatives, and so on. In the hybrid multicloud environment I described above, there is a risk of ending up with separate islands of secrets. It is difficult for security teams to see how many secrets are in use overall, who uses them, and where. And if they can’t see them, how can they ensure they are safe? Another challenge associated with the automation and DevOps trends is how secrets are used. Too often we see secrets hardcoded in source code or configuration files, in plain text, and then uploaded to public repositories such as GitHub. These secrets, especially the ones used by privileged users such as network or security admins and DevOps engineers, have traditionally been managed by Privileged Access Management (PAM) solutions.
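The hardcoded-secrets problem described above can be illustrated with a minimal Python sketch. The `get_secret` helper and the `DB_PASSWORD` name are hypothetical, and a real deployment would typically fetch from a dedicated secrets manager (with rotation, auditing, and access control) rather than plain environment variables; the point is simply that the secret never lives in the code.

```python
import os

# Anti-pattern: a credential hardcoded in source ends up in version
# control and, eventually, in a public repository.
# DB_PASSWORD = "hunter2"  # never do this

def get_secret(name: str) -> str:
    """Fetch a secret injected at deploy time instead of baked into code.

    In production this lookup would go to a secrets manager or vault;
    the environment variable is the minimal stand-in for "keep the
    secret out of the repository".
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value

# Example: the platform sets the variable; the app reads it at runtime.
os.environ["DB_PASSWORD"] = "example-only"  # normally set by the deploy tooling
password = get_secret("DB_PASSWORD")
```

Raising on a missing secret (rather than falling back to a default) also makes misconfiguration visible immediately instead of at first use.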


Open source creates value, but how do you measure it?

Beyond updating our understanding of innovation outputs with open source, there are many more innovation questions: How does open source software contribute to innovation as an input, and can targeted research funding for open source increase this contribution? Further research should build on initial measurement efforts[7] to understand how and to what extent open source software accelerates scientific research; As open source business models have evolved over time, how have firm contributions to open source changed? Amid these business innovations, particularly the rise of cloud-based software as a service, what is the relative contribution to open source from these big cloud companies?; How do we value the contributions of innovations in developer tools to open source, including maintainers’ productivity and workload? ...; What is the economic impact—at both an organizational and economy-wide level—of new institutional approaches to open source, including the Open Source Program Office, pioneered in industry that is now percolating into the public and social sectors?


Why Artificial Intelligence (AI) pilot projects fail: 4 reasons

Not every person working on an AI-based project is an AI genius. However, successfully deploying an AI solution requires a general understanding by every employee and end user. Everyone within an organization should understand the possibilities and limitations. A lack of knowledge among those involved leads to a lack of adoption. ... Everyone from executives to employees needs open feedback loops to allow for discussions on AI and to get people acquainted with the solution. Those more familiar with AI then have the opportunity to clearly communicate the level of interaction it requires, ensuring everyone has the correct information needed for maximum efficiency. Leading the change management needed to implement AI for digital transformation success is not limited to the role of the CIO or IT team. Instead, the business as a whole needs to work together to ensure every department has the proper tools and technologies in place to meet its respective standards.


Closing the agile achievement gap

The primary role of the lean portfolio management (LPM) function in agile-minded organizations is to align agile development with business strategy. In most cases, this function is made up of staff from the organization’s finance, IT, and business units, and also draws on expertise and input from human resources and IT teams. Most important, the LPM function aligns the annual planning and funding processes with the agile methodology. It also establishes objectives and key results and key performance indicators (KPIs) to measure the effectiveness of the work being done and to keep deliverables on track. These tasks are often time-consuming and involve large change management efforts, which is why the LPM function must be implemented early in the process. A wholesale retail company needed to define and implement an LPM function at the outset of its agile transformation. The company needed to modernize its workforce and IT operating model and employ a product-centric mindset on projects.


HR and data: what gets measured gets improved

Used wisely, data has colossal power. This was recognised by the management theorist Peter Drucker, who reportedly said, “What gets measured gets improved”. The trick is to understand the value of data, measure the right things and then make sense of it all to inform decisions. And huge swathes of the economy are now doing so – often using AI – to drive innovation and accelerate growth. Sadly, HR is lagging. A search of the top HR degrees in the UK shows that few of them treat data as a major part of the job. Out of 39 modules, over three years, one degree course lists “managing data” just once. And if you ask most people why they got into HR, it’s about relationships — making people’s working lives better, supporting others and helping employees thrive. These are all vital, but it often means data is ignored, despite it having a huge role to play in meeting these goals. This is a fact recognised by the CIPD. It says too few organisations use HR data and analytics to help inform strategic decisions about how they invest in, manage and develop their workforce to deliver on their business strategy.



Quote for the day:

"If you don't find a leader, perhaps it is because you were meant to lead." -- Glenn Beck

Daily Tech Digest - September 29, 2021

Approaching Anomaly Detection in Transactional Data

Usually, people mean financial transactions when they talk about transactional data. However, according to Wikipedia, “Transactional Data is data describing an event (the change as a result of a transaction) and is usually described with verbs. Transaction data always has a time dimension, a numerical value and refers to one or more objects”. In this article, we will use data on requests made to a server (internet traffic data) as an example, but the considered approaches can be applied to most of the datasets falling under the aforementioned definition of transactional data. Anomaly Detection, in simple words, is finding data points that shouldn’t normally occur in a system that generated data. Anomaly detection in transactional data has many applications, here are a couple of examples: Fraud detection in financial transactions; Fault detection in manufacturing; Attack or malfunction detection in a computer network (the case covered in this article); Recommendation of predictive maintenance; and Health condition monitoring and alerting.
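A minimal illustration of the idea on server-traffic-style data: flag time buckets whose request counts deviate from the mean by more than a chosen number of standard deviations. The toy counts and the 2.5-sigma threshold are illustrative only; real pipelines use richer features and models, and a single extreme outlier inflates the standard deviation, which is why a plain z-score is only a starting point.

```python
from statistics import mean, stdev

def zscore_anomalies(counts, threshold=2.5):
    """Return indices of points lying more than `threshold` standard
    deviations from the mean request count."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# Requests per minute; the burst at index 6 is the kind of point
# that "shouldn't normally occur" in this system.
requests = [52, 48, 50, 47, 53, 49, 950, 51, 50, 48]
print(zscore_anomalies(requests))  # → [6]
```

Note that the outlier itself drags the mean to ~140 and the standard deviation to ~285, so its z-score is only ~2.85 — a concrete example of why robust statistics (e.g. median absolute deviation) are often preferred.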


Apache Kafka: Core Concepts and Use Cases

The first thing anyone who works with streaming applications ought to understand is the event: a small piece of data. For instance, when a user registers within the system, an event is created. You can also think of an event as a message carrying data, which can be processed and saved somewhere if required. This event is the message in which data such as the user’s name, email, password, and so forth can be included. This highlights that Kafka is a platform that works well for streaming events. Events are continually written by producers. They are called producers because they write events, or data, to Kafka. There are many kinds of producers. Examples include web servers, parts of applications, whole applications, IoT devices, monitoring agents, and so on. A new user registration event can be produced by the component of the site that is responsible for user registrations.
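As a minimal sketch of the idea, a user-registration event is just a keyed, timestamped piece of data. The field names below are hypothetical; with a real client (for example kafka-python's `KafkaProducer`) the serialized bytes would be sent to a topic, as hinted in the final comment.

```python
import json
import time

def make_registration_event(user_id: str, name: str, email: str) -> dict:
    """Build a user-registration event: a small piece of data with a
    time dimension, a value, and a reference to an object (the user)."""
    return {
        "type": "user_registered",
        "timestamp": time.time(),
        "key": user_id,  # with Kafka, the message key drives partitioning
        "payload": {"name": name, "email": email},
    }

event = make_registration_event("u-42", "Ada", "ada@example.com")
serialized = json.dumps(event).encode("utf-8")  # what a producer would send
# e.g. producer.send("user-events", key=event["key"].encode(), value=serialized)
```

Keying the event by user ID means all events for one user can be routed to the same partition and therefore consumed in order.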


How to Build a Regression Testing Strategy for Agile Teams

Regression testing is the process of testing software to verify that a code change, update, or improvement has not affected the application’s existing functionality. In software engineering, regression testing ensures the overall stability and functionality of existing features, so that the system stays sustainable under continuous improvement as new features are added to the code. It helps target and reduce the risk of code dependencies, defects, and malfunctions, so that previously developed and tested code stays operational after modification. Generally, software undergoes many tests before new changes are integrated into the main development branch of the code. ... Automated regression testing is mainly used on medium and large complex projects once the project is stable. With a thorough plan, automated regression testing reduces the time and effort a tester spends on tedious, repeatable tasks, freeing them for work that requires manual attention, such as exploratory tests and UX testing.
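The core mechanic can be sketched in a few lines: capture golden input/output pairs for existing behavior, then re-run them after every modification. The `apply_discount` function and its cases are hypothetical stand-ins for real business rules; in practice a framework such as pytest automates exactly this loop across the whole suite.

```python
def apply_discount(price: float, percent: float) -> float:
    """Business rule under maintenance: discount a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

# Golden cases captured before the change; re-run after every modification
# to confirm existing functionality is unaffected.
REGRESSION_CASES = [
    ((100.0, 10.0), 90.0),
    ((19.99, 0.0), 19.99),
    ((50.0, 50.0), 25.0),
]

def run_regression_suite() -> None:
    for args, expected in REGRESSION_CASES:
        got = apply_discount(*args)
        assert got == expected, f"regression: {args} -> {got}, expected {expected}"

run_regression_suite()  # passes silently when behavior is unchanged
```

A failing assertion here is the signal that a new change broke previously tested code, before it reaches the main branch.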


Sam Newman on Information Hiding, Ubiquitous Language, UI Decomposition and Building Microservices

The ubiquitous language in many ways is the keystone of domain-driven design and it's amazing how many people skip it, and it's foundational. I think a lot of the reason that people skip ubiquitous language is because to understand what terms and terminology are used by the business side of your organization, by the users of your software, it involves having to talk to people. It still stuns me how many enterprise architects have come up with a domain model by themselves without ever having spoken to anybody outside of IT. So fundamentally, the ubiquitous language starts with having conversations. This is why I like event storming as a domain-driven design technique, because it places primacy on having that kind of collective brainstorming activity where you get your non-developer, non-technical stakeholders in the room and listen to what they're talking about, and you're picking up their terms, their terminology, and you're trying to put those terms into your code.


Technical architecture: What IT does for a living

Technical architecture is the sum and substance of what IT deploys to support the enterprise. As such, its management is a key IT practice. We talked about how to go about it in a previous article in this series. Which leads to the question, What constitutes good technical architecture? Or more foundationally, What constitutes technical architecture, whether good, bad, or indifferent? In case you’re a purist, we’re talking about technical architecture, not enterprise architecture. The latter includes the business architecture as well as the technical architecture. Not that it’s possible to evaluate the technical architecture without understanding how well it supports the business architecture. It’s just that managing the health of the business architecture is Someone Else’s Problem. IT always has a technical architecture. In some organizations it’s deliberate, the result of processes and practices that matter most to CIOs. But far too often, technical architecture is accidental — a pile of stuff that’s accumulated over time without any overall plan.


Preparing for the 'golden age' of artificial intelligence and machine learning

"Implementing an AI solution is not easy, and there are many examples of where AI has gone wrong in production," says Tripti Sethi, senior director at Avanade. "The companies we have seen benefit from AI the most understand that AI is not a plug-and-play tool, but rather a capability that needs to be fostered and matured. These companies are asking 'what business value can I drive with data?' rather than 'what can my data do?'" Skills availability is one of the leading issues that enterprises face in building and maintaining AI-driven systems. Close to two-thirds of surveyed enterprises, 62%, indicated that they couldn't find talent on par with the skills requirements needed in efforts to move to AI. More than half, 54%, say that it's been difficult to deploy AI within their existing organizational cultures, and 46% point to difficulties in finding funding for the programs they want to implement. ... In recent months and years, AI bias has been in the headlines, suggesting that AI algorithms reinforce racism and sexism. 


Skilling in the IT sector for a post pandemic era – An Experts View

“When there’s a necessity, innovations follow,” said Mahipal Nair (People Development & Operations Leader, NielsenIQ). The company moved from people-interaction-dependent learning to digital methods to navigate skilling priorities. As consumer expectations change, leadership and social skills have become a priority for workplace performance. “The way to solve this is not just to transform current talent, but create relevant talent,” said Nilanjan Kar (CRO, Harappa). Echoing the sentiment, Kirti Seth (CEO, SSC NASSCOM) added that “learning should be about principles, and it should enable employees to make the basics their own.” This will help create a learning organization that can contextualize change across the industry to stay relevant and map the desired learning outcomes. While companies upskill their workforce on these priorities, the real question is what skills will be required? Anupal Banerjee (CHRO, Tata Technologies) noted that “with the change in skills, there are multiple levels to focus on. While one focus area is on technical skills, the second is on behavioral skills. ...”.


Re-evaluating Kafka: issues and alternatives for real-time

By nature, your Kafka deployment is pretty much guaranteed to be a large-scale project. Imagine operating an equally large-scale MySQL database that is used by multiple critical applications. You’d almost certainly need to hire a database administrator (or a whole team of them) to manage it. Kafka is no different. It’s a big, complex system that tends to be shared among multiple client applications. Of course it’s not easy to operate! Kafka administrators must answer hard design questions from the get-go. This includes defining how messages are stored in partitioned topics, retention, and team or application quotas. We won’t get into detail here, but you can think of this task as designing a database schema, but with the added dimension of time, which multiplies the complexity. You need to consider what each message represents, how to ensure it will be consumed in the proper order, where and how to enact stateful transformations, and much more — all with extreme precision.
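The ordering question raised above comes down to how messages are assigned to partitions: all messages sharing a key land in the same partition, and only within a partition is consumption order guaranteed. Below is a minimal stand-in for a keyed partitioner; Kafka's default partitioner actually uses a murmur2 hash, so the MD5 here is purely illustrative (Python's built-in `hash` is avoided because it is salted per process).

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a message key to a partition deterministically, so every
    message with the same key lands in the same partition and keeps
    its relative order."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events for one user go to one partition, so they stay ordered.
p1 = partition_for("user-42", 12)
p2 = partition_for("user-42", 12)
assert p1 == p2
```

This is also why the partition count is one of the hard up-front design questions: changing it later changes the key-to-partition mapping and can break ordering assumptions.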


Climbing to new heights with the aid of real-time data analytics

Enter hybrid analytics. The world of data management has been reimagined, with analytics at the speed of transactions made possible through simpler processes and a single hybrid system that breaks down the walls between transactions and analytics. Hybrid analytics makes it possible to avoid moving information from databases to data warehouses and allows simple real-time data processing. This innovation enables enhanced customer experiences and a more data-driven approach to decision making, thanks to the deeper business insights delivered through a hybrid system. Hybrid analytics also brings real-time processing and a faster time to insight. Businesses can better understand their customers without long, complex processes, while the feedback loop is shortened for increased efficiency. It’s this approach that delivers a data-driven competitive advantage for businesses. Both developers and database administrators can access and manage data far more easily, only having to deal with one connected system and no database sprawl.


Why DevSecOps fails: 4 signs of trouble

When Haff says that some organizations make the mistake of not giving DevSecOps its due, he adds that the people and culture component is most often the glaring omission. Of course, it’s not actually “glaring” until you realize that your DevSecOps initiative has fallen flat and you start to wonder why. One way you might end up traveling this suboptimal path: you focus too much on technology as the end-all solution rather than as a layer in a multi-faceted strategy. “They probably have adopted at least some of the scanning and other tooling they need to mitigate various types of threats. They’re likely implementing workflows that incorporate automation and interactive development,” Haff says. “What they’re likely paying less attention to – and may be treating as an afterthought – is people and culture.” Just as DevOps was about more than a toolchain, DevSecOps is about more than throwing security technologies at various risks. “An organization can get all the tools and mechanics right but if, for example, developers and operations teams don’t collaborate with your security experts, you’re not really doing DevSecOps,” Haff says.



Quote for the day:

"Authentic leaders are often accused of being 'controlling' by those who idly sit by and do nothing" --John Paul Warren

Daily Tech Digest - December 18, 2020

Chaos Engineering: A Science-based Approach to System Reliability

While testing is standard practice in software development, it’s not always easy to foresee issues that can happen in production. Especially as systems become increasingly complex to deliver maximum customer value. The adoption of microservices enables faster release times and more possibilities than we’ve ever seen before, however they introduce challenges. According to the 2020 IDG cloud computing survey, 92 percent of organizations’ IT environments are at least somewhat in the cloud today. In 2020, we saw highly accelerated digital transformation as organizations had to quickly adjust to the impact of a global pandemic. With added complexity comes more possible points of failure. The trouble is that we humans managing these intricate systems cannot possibly understand or foresee all of the issues because it’s impossible to understand how each of the individual components of a loosely coupled architecture will relate to each other. This is where Chaos Engineering steps in to proactively create resilience. The major caveat of Chaos Engineering is that things are broken in a very intentional and controlled manner while in production, unlike regular QA practices, where this is done in safe development environments. It is methodical and experimental and less ‘chaotic’ than the name implies.
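The "intentional and controlled" breakage described above can be sketched as an in-process fault-injection wrapper. The failure rate, the seeded randomness, and the `chaos`/`fetch_with_retry` names are illustrative; real chaos tooling (Chaos Monkey and the like) injects faults at the infrastructure level, but the principle — a repeatable experiment that verifies callers survive failure — is the same.

```python
import random

def chaos(failure_rate: float, rng: random.Random):
    """Wrap a function so calls fail with a controlled probability,
    letting you observe how callers handle the fault."""
    def wrap(fn):
        def flaky(*args, **kwargs):
            if rng.random() < failure_rate:
                raise RuntimeError("injected fault")
            return fn(*args, **kwargs)
        return flaky
    return wrap

rng = random.Random(7)  # seeded: the experiment is repeatable, not "chaotic"

@chaos(failure_rate=0.3, rng=rng)
def fetch_price(item: str) -> float:
    return 9.99  # stand-in for a downstream service call

def fetch_with_retry(item: str, attempts: int = 5) -> float:
    """A resilient caller retries instead of crashing."""
    for _ in range(attempts):
        try:
            return fetch_price(item)
        except RuntimeError:
            continue
    raise RuntimeError("service unavailable")

price = fetch_with_retry("widget", attempts=20)
```

The experiment either confirms the retry logic absorbs the injected faults or exposes a caller that propagates them — which is exactly the knowledge chaos engineering is after.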


ECLASS presents the Distributed Ledger-based Infrastructure for Industrial Digital Twins

Advancing digitalization, increasing networking and horizontal integration in the areas of purchasing, logistics and production, as well as in the engineering, maintenance and operation of machines and products, are creating new opportunities and business models that were unimaginable before. Classic value chains are turning more and more into interconnected value networks in which partners can seamlessly find and exchange the relevant information. Machines, products and processes receive their Digital Twins, which represent all relevant aspects of the physical world in the information world. The combination of physical objects and their Digital Twins creates so-called Cyber Physical Systems. Over the complete lifecycle, the relevant product information and production data captured in the Digital Twin must be available to the partners in the value chain at any time and in any place. The digital representation of the real world in the information world, in the form of Digital Twins, is therefore becoming increasingly important. However, the desired horizontal and vertical integration and cooperation of all participants in the value network across company boundaries, countries, and continents can only succeed on the basis of common standards.


Data Protection Bill won’t get cleared in its current version

Pande from Omidyar Network India said stakeholders of the data privacy regulations should consider making the concept of consent more effective and simple. The National Institute of Public Finance and Policy (NIPFP) administered a quiz in 2019 to test how well urban, English speaking college students understand privacy policies of Flipkart, Google, Paytm, Uber, and WhatsApp. The students only scored an average of 5.3 out of 10. The privacy policies were as complex as a Harvard Law Review paper, Pande said. Facebook’s Claybaugh, however, said that “despite the challenges of communicating with people about privacy, we do take pretty strong measures both in our data policy which is interactive, in relatively easy-to-understand language compared to, kind of, the terms of service we are used to seeing.” Lee, who earlier worked with Singapore’s Personal Data Protection Commission said challenges of a (DPA) are “manifold”. She said it must be ensured that the DPA is “independent” and is given necessary powers especially when it must regulate the government. The DPA must be staffed with the right people with knowledge of technical and legal issues involved, she added.


India approves game-changing framework against cyber threats

The office of National Security Advisor Ajit Doval, sources said, noted that with the increasing use of Internet of Things (IoT) devices, the risk will continue to increase manifold and the advent of 5G technology will further increase the security concerns resulting from telecom networks. Maintaining the integrity of the supply chain, including electronic components, is also necessary for ensuring security against malware infections. Telecom is also the critical underlying infrastructure for all other sectoral information infrastructure of the country such as power, banking and finance, transport, governance and the strategic sector. Security breaches resulting in compromise of confidentiality and integrity of information or in disruption of the infrastructure can have disastrous consequences. Sources said that in view of these issues, the NSA office had recommended a framework -- 'National Security Directive on Telecom Sector', which will address 5G and supply chain concerns. Under the provisions of the directive, in order to maintain the integrity of the supply chain security and in order to discourage insecure equipment in the network, government will declare a list of 'Trusted Sources/Trusted Products' for the benefit of the Telecom Service Providers (TSPs).


The case for HPC in the enterprise

Essentially, HPC is an incredibly powerful computing infrastructure built specifically to conduct intensive computational analysis. Examples include physics experiments that identify and predict black holes. Or modeling genetic sequencing patterns against disease and patient profiles. In the past year, the Amaro Lab at UC San Diego performed modeling on the COVID-19 coronavirus to an atomic level using one of the top supercomputers in the world at the Texas Advanced Computing Center (TACC). I hosted a webinar with folks from UCSD, TACC and Intel discussing their work here. Those types of compute intensive workloads are still happening. However, enterprises are also increasing their demand for compute intensive workloads. Enterprises are processing increasing amounts of data to better understand customers and business operations. At the same time, edge computing is creating an explosive number of new data sources. Due to the sheer amount of data, enterprises are leveraging automation through the form of machine learning and artificial intelligence to parse the data and gain insights while making faster and more accurate business decisions. Traditional systems architectures are simply not able to keep up with the data tsunami.


5 reasons IT should consider client virtualization

First is the compatibility to run different operating systems or different versions of the same operating system. For example, many enterprise workers are increasingly running applications that are cross-platform such as Linux applications for developers, Android for healthcare or finance, and Windows for productivity. Second is the potential to isolate workloads for better security. Note that different types of virtualization models co-exist to support the diverse needs of customers (and applications in general are getting virtualized for better cloud and client compatibility). The focus of this article is full client virtualization that enables businesses to take complete advantage of the capabilities of rich commercial clients including improved performance, security and resilience. Virtualization in the client is different from virtualization in servers. It’s not just about CPU virtualization, but also about creating a good end-user experience with, for example, better graphics, responsiveness of I/O, network, optimized battery life of mobile devices and more. A decade ago, the goal of client virtualization was to use a virtual machine for a one-off scenario or workload.


The top 6 use cases for a data fabric architecture

A data fabric architecture promises a way to deal with many of the security and governance issues being raised by new privacy regulations and the rise in security breach incidents. "By far the largest positive impact of a data fabric for organizations is the focus on enterprise-wide data security and governance as part of the deployment, establishing it as a fundamental, ongoing process," said Wim Stoop, director of product marketing at Cloudera. Data governance is often seen in isolation, tied to a use case like tackling regulatory compliance needs or departmental requirements in isolation. With a data fabric, organizations are required to take a step back and consider data management holistically. This delivers the self-service access to data and analytics businesses demand to experiment and quickly drive value from data. Such a degree of management, governance and security of data then also makes proving compliance -- both industry and regulatory -- more or less a side effect of having implemented the fabric itself. Although this is not a full solution, it greatly reduces the effort associated with adhering to compliance requirements. Platz cautioned that there is a wide gulf between a vision for a perfect data fabric and what is practical today. "In practice, many first versions of data fabric architectures look more like just another data lake," Platz said.


Malicious Browser Extensions for Social Media Infect Millions of Systems

"This could be used to gather credentials and other sensitive corporate data from the websites visited by the victim," he says. "We are preparing a technical blog post with more technical information and IoCs, but for now, we can share the ... malicious domains." The malicious extensions are the latest attempt by cybercriminals to hide code in add-ons for popular browsers. In February, independent researcher Jamila Kaya and Duo Security announced they had discovered more than 500 Chrome extensions that infected millions of users' browsers to steal data. In June, Awake Security reported more than 70 extensions in the Google Chrome Web store were downloaded more than 32 million times and which collected browsing data and credentials for internal websites. In its latest research, Avast found the third-party extensions would collect information about users whenever they clicked on a link, offering attackers the option to send users to an attacker-controlled URL before forwarding them to their destinations. The extensions also collect the users' birthdates, e-mail addresses, and information about the local system, including name of the device, its operating system, and IP addresses.


How to use Agile swarming techniques to get features done

Teams that concentrate on individual skills and tasks end up with some members far ahead and others grinding away at unfinished work. For example, a back-end developer is still working on a feature, while the front-end developer for that feature has finished coding. The front-end developer then starts coding the next feature. The team can design hooks into the code to let the front-end developers validate their work. However, a feature is not done until a team completes the whole thing, fully integrates it and tests it. Letting developers move asynchronously through the project might result in good velocity metrics, but those measures don't always translate to the team delivering the feature on time. If testers discover issues in a delivered feature, the entire team must return to already completed tasks. Let this scenario play out in a real software organization, and you end up with partially completed work on many disparate tasks, and nothing finished. The goal of Agile development is not to ensure the team is 100% busy, with each person grabbing new product backlog items as soon as they complete their prior task. This approach to development results in extensive multitasking and ultimately slows the flow of completed items.


Application Level Encryption for Software Architects

Unless well-defined, the task of application-level encryption is frequently underestimated, poorly implemented, and results in haphazard architectural compromises when developers find out that integrating a cryptographic library or service is just the tip of the iceberg. Whoever is formally assigned the job of implementing encryption-based data protection faces thousands of pages of documentation on how to implement things better, but very little on how to design things correctly. Design exercises turn into a bumpy ride whenever you don’t anticipate the need for design and instead make a sequence of ad-hoc decisions because you expected to get things done quickly. First, you face key model and cryptosystem choice challenges, which hide under “which library/tool should I use for this?” Hopefully, you chose a tool that fits your use case security-wise, not the one with the most stars on GitHub. Hopefully, it contains only secure and modern cryptographic decisions. Hopefully, it will be compatible with other teams’ choices when the encryption has to span several applications/platforms. Then you face key storage and access challenges: where to store the encryption keys, how to separate them from data, what are the integration points where the components and data meet for encryption/decryption, and what is the trust/risk level toward these components?
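The key-separation question raised above can be sketched in a few lines: a per-record data key is derived from a master key that lives outside the data store (in practice, a KMS or HSM), and only a record identifier travels with the ciphertext. Everything below is illustrative only — the HMAC-based counter-mode keystream is a teaching construction, not production cryptography; a real system should use a vetted library such as libsodium or Fernet from the `cryptography` package, with authenticated encryption and unique nonces.

```python
import hmac
import hashlib
import os

MASTER_KEY = os.urandom(32)  # in practice: held in a KMS/HSM, never beside the data

def derive_data_key(record_id: str) -> bytes:
    """Per-record key derived from the master key; only record_id is stored."""
    return hmac.new(MASTER_KEY, record_id.encode(), hashlib.sha256).digest()

def keystream(key: bytes, length: int) -> bytes:
    """Illustrative HMAC counter-mode keystream (NOT production crypto)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(record_id: str, plaintext: bytes) -> bytes:
    key = derive_data_key(record_id)
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream inverts it

ciphertext = encrypt("user-42", b"card=4111-1111")
assert decrypt("user-42", ciphertext) == b"card=4111-1111"
```

The architectural point is the shape, not the cipher: the data store holds `(record_id, ciphertext)` while the master key sits behind a separately trusted component, so a database dump alone yields nothing readable.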



Quote for the day:

"Public opinion is no more than this: What people think that other people think." -- Alfred Austin

Daily Tech Digest - October 24, 2020

How will self-driving cars affect public health?

The researchers created a conceptual model to systematically identify the pathways through which AVs can affect public health. The proposed model summarizes the potential changes in transportation after AV implementation into seven points of impact: transportation infrastructure; land use and the built environment; traffic flow; transportation mode choice; transportation equity; jobs related to transportation; and traffic safety. The changes in transportation are then attributed to potential health impacts. In optimistic views, AVs are expected to prevent 94% of traffic crashes by eliminating driver error, but AVs’ operation introduces new safety issues such as sensors malfunctioning when detecting objects, misinterpretation of data, and poorly executed responses, which can jeopardize the reliability of AVs and cause serious safety consequences in an automated environment. Another possible safety consideration is riskier behavior by users because of their overreliance on AVs—for example, neglecting to use seatbelts due to an increased false sense of safety. AVs have the potential to shift people from public transportation and active transportation, such as walking and biking, to private vehicles in urban areas, which can result in more air pollution and greenhouse gas emissions and create the potential loss of driving jobs for those in the public transit or freight transport industries.


Now’s The Time For Long-Term Thinking

For most financial institutions, the strategic planning process for 2021 is far different than any in the past. Rather than an iterative adjustment to the previous year's plans, this year's planning must take into account a level of change in technology, competition, consumer behavior, society and many other areas that is far less defined than before. The uncertainty about the future requires a solid strategic foundation combined with sensing capabilities and the ability to respond to threats and opportunities as quickly as possible. For many banks and credit unions, this will require organizational restructuring, reallocation of resources, revamped processes, new outside partners and a culture that supports a flexibility in planning that was never required before. There is also the need to build a marketplace-sensing capability across the entire organization, drawing on a broader array of sources: customers, internal staff (especially customer-facing employees), suppliers, strategic partners, research organizations, boards of directors and even competitors. Gathering the insights is only half the battle; there must also be a centralized place to collect and analyze them.


Rapid Threat Evolution Spurs Crucial Healthcare Cybersecurity Needs

Cybercriminals have been actively taking advantage of the global pandemic, with an increase in cyberattacks, phishing, spear-phishing, and business email compromise (BEC) attempts. And on the healthcare side of things, NCSA Executive Director Kelvin Coleman said it's not a huge surprise. Even in the early 1900s, during the Spanish flu pandemic, people placed articles in newspapers to take advantage of the crisis with hoaxes and scams, Coleman explained. "Bad actors take advantage of crises," he said. "Hackers are being aggressive, leveraging targeted emails and phishing attempts." Josh Corman, cofounder of IAmTheCavalry.org and DHS CISA Visiting Researcher, stressed that when a provider is forced into EHR downtime and has to divert patient care, it's even more nightmarish during a pandemic. In Germany, a patient died earlier this month after a ransomware attack shut down operations at a hospital and she was diverted to another facility. These are criminals without scruples, Corman explained. The attacks were happening before the pandemic, but there has been no ceasefire amid the crisis. In healthcare, hackers continue to rely on previously successful attack methods, especially phishing.


FBI, CISA: Russian hackers breached US government networks, exfiltrated data

US officials identified the Russian hacker group as Energetic Bear, a codename used by the cybersecurity industry. Other names for the same group include TEMP.Isotope, Berserk Bear, TeamSpy, Dragonfly, Havex, Crouching Yeti, and Koala. Officials said the group has been targeting dozens of US state, local, territorial, and tribal (SLTT) government networks since at least February 2020. Companies in the aviation industry were also targeted, CISA and FBI said. The two agencies said Energetic Bear "successfully compromised network infrastructure, and as of October 1, 2020, exfiltrated data from at least two victim servers." The intrusions detailed in today's CISA and FBI advisory are a continuation of attacks detailed in a previous CISA and FBI joint alert, dated October 9. The previous advisory described how hackers had breached US government networks by chaining together vulnerabilities in VPN appliances and Windows. Today's advisory attributes those intrusions to the Russian hacker group but also provides additional details about Energetic Bear's tactics. According to the technical advisory, Russian hackers used publicly known vulnerabilities to breach networking gear, pivot to internal networks, elevate privileges, and steal sensitive data.


Secure NTP with NTS

NTP can be secured well with symmetric keys. Unfortunately, the server has to have a different key for each client and the keys have to be securely distributed. That might be practical with a private server on a local network, but it does not scale to a public server with millions of clients. NTS includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. It uses Transport Layer Security (TLS) on TCP port 4460. It is designed to scale to very large numbers of clients with a minimal impact on accuracy. The server does not need to keep any client-specific state. It provides clients with cookies, which are encrypted and contain the keys needed to authenticate the NTP packets. Privacy is one of the goals of NTS. The client gets a new cookie with each server response, so it doesn’t have to reuse cookies. This prevents passive observers from tracking clients migrating between networks. The default NTP client in Fedora is chrony. Chrony added NTS support in version 4.0. The default configuration hasn’t changed. Chrony still uses public servers from the pool.ntp.org project and NTS is not enabled by default. Currently, there are very few public NTP servers that support NTS. The two major providers are Cloudflare and Netnod.
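Since the excerpt mentions chrony 4.0 and Cloudflare's public NTS service, here is a minimal sketch of what enabling NTS in `/etc/chrony.conf` might look like. The server name is Cloudflare's public NTS endpoint and the dump directory is a common Fedora layout; verify both against your distribution's documentation before relying on them.

```
# Add the `nts` option to a server that supports NTS-KE (TLS on TCP 4460)
server time.cloudflare.com iburst nts

# Persist NTS keys and cookies across restarts
ntsdumpdir /var/lib/chrony
```

After restarting chronyd, `chronyc -N authdata` should show whether NTS authentication is active for the configured source.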


Non-Intimidating Ways To Introduce AI/ML To Children

The brainchild of IBM, Machine Learning for Kids is a free, web-based tool that introduces children to machine learning systems and real-world applications of AI. Machine Learning for Kids was built by Dale Lane using APIs from IBM Watson. It provides hands-on experiments for training ML systems that recognise text, images, sounds, and numbers, and it leverages platforms such as Scratch and App Inventor to create interesting projects and games. It is also used in schools as a significant resource for teaching AI and ML, and teachers can set up their own admin page to manage student access. A product of the MIT Media Lab, Cognimates is an open-source AI learning platform for children starting from age 7. Children can learn how to build games and robots and train their own AI models. Like Machine Learning for Kids, Cognimates is based on the Scratch programming language. It provides a library of tools and activities for learning AI, and even allows children to program intelligent devices such as Alexa. Another offering from Google to make learning AI fun and engaging is AIY, the name being a playful blend of AI and do-it-yourself (DIY).


How RPA differs from conversational AI, and the benefits of both

Enterprises are working to digitally transform core business processes to enable greater automation of backend processes and to encourage more seamless customer experiences and self-service at the frontend. We are seeing banks, insurers, retailers, energy providers and telcos working to develop their own digital assistants with a growing number of skills, while still providing a consistent brand experience. Developing bots doesn’t have to be complex. It is more important to carefully identify the right use cases where these technologies will deliver clear ROI with the least amount of effort. Whether an enterprise is applying RPA or conversational AI, or both, it’s important to first understand the business problem that needs to be solved, and then identify where bots will make an immediate difference. Then consider the investment required, barriers to successful implementation, and the expected business outcomes. It’s better to start small with a narrowly focused use case and achievable KPIs, rather than trying to do too much at once. Conversational AI and RPA are very powerful automation technologies. When designed well, a chatbot can automate up to 80% of routine queries that come into a customer service centre or IT helpdesk, saving an organisation time and money and enabling it to scale its operations.


Things to consider when running visual tests in CI/CD pipelines: Getting Started

Testing – it's an important part of a developer's day-to-day, but it's also crucial to the operations engineer. In a world where DevOps is more than just a buzzword, where it's become accepted as a mindset shift and culture change, we all need to consider running quality tests. Traditional testing may include UI testing, integration testing, code coverage checks, and so forth, but at some point we still need eyeballs on an actual page. How many times have we seen a funny-looking page because of CSS errors? Or worse, an important button, say "Buy now", gone "missing" because someone changed the CSS and now the button blends in with the background? Logically, the page still works; even from a traditional test perspective the button can be clicked, and the DOM (used in UI test verification) is perfect. Visually, however, the page is broken, and this is where visual testing comes into play. Visual testing combines automated UI testing with the power of AI to help determine whether a page "looks right" in addition to "functioning right." Earlier this year, I partnered with Angie Jones from Applitools in a joint webinar where we talked about best practices for both visual testing and CI/CD. This blog post is a summary of that webinar and of how to handle visual testing in CI/CD pipelines.
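The "blends into the background" failure mode above can be made concrete with a bare-bones pixel comparison. Commercial visual-testing tools such as Applitools layer AI on top to ignore harmless rendering noise; this self-contained sketch (images modelled as 2D lists of RGB tuples, names invented for illustration) only shows the core idea of comparing a capture against a baseline with a tolerance.

```python
def diff_ratio(baseline, capture):
    """Fraction of pixels that differ between two equally sized images,
    given as 2D lists of (r, g, b) tuples."""
    total = changed = 0
    for row_a, row_b in zip(baseline, capture):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            if px_a != px_b:
                changed += 1
    return changed / total

def looks_right(baseline, capture, tolerance=0.01):
    # Allow up to 1% of pixels to change (anti-aliasing, dynamic content)
    # before declaring the page visually broken.
    return diff_ratio(baseline, capture) <= tolerance

# A 2x2 "screenshot" where one pixel (the button's colour) changed:
white, blue, grey = (255, 255, 255), (0, 0, 255), (200, 200, 200)
baseline = [[white, blue], [white, white]]
broken = [[white, grey], [white, white]]  # button now blends in

assert looks_right(baseline, baseline)
assert not looks_right(baseline, broken)  # 25% of pixels changed
```

Note how the DOM-level test the excerpt describes would still pass on `broken` (the button element exists and is clickable); only the pixel-level comparison catches the regression.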


Design patterns – for faster, more reliable programming

Every design has a pattern and everything has a template, whether it be a cup, house, or dress. No one would consider attaching a cup’s handle to the inside – apart from novelty item manufacturers. It has simply been proven that these components should be attached to the outside for practical purposes. If you are taking a pottery class and want to make a pot with handles, you already know what the basic shape should be. It is stored in your head as a design pattern, in a manner of speaking. The same general idea applies to computer programming. Certain procedures are repeated frequently, so it was no great leap to think of creating something like pattern templates. In our guide, we will show you how these design patterns can simplify programming. The term “design pattern” was originally coined by the American architect Christopher Alexander who created a collection of reusable patterns. His plan was to involve future users of the structures in the design process. This idea was then adopted by a number of computer scientists. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (sometimes referred to as the Gang of Four or GoF) helped software patterns break through and gain acceptance with their book “Design Patterns – Elements of Reusable Object-Oriented Software” in 1994.
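To make the idea of a reusable pattern template concrete, here is a minimal Python sketch of one of the 23 GoF patterns, Observer: a subject broadcasts state changes to interested parties without knowing anything about them. The class and variable names are illustrative, not taken from the article.

```python
class Subject:
    """Keeps a list of observers and notifies them of events,
    decoupling the event source from the parties that react to it."""

    def __init__(self):
        self._observers = []

    def attach(self, observer):
        # Any callable accepting one event argument can observe.
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)

# Usage: two independent listeners react to one button click
log = []
button = Subject()
button.attach(lambda e: log.append(f"analytics: {e}"))
button.attach(lambda e: log.append(f"ui: {e}"))
button.notify("clicked")
assert log == ["analytics: clicked", "ui: clicked"]
```

Like the pot handle, the value is in not re-deriving the shape each time: the pattern names a proven structure you can recognise and reuse.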


Public and Private Blockchain: How to Differentiate Them and Their Use Cases

Public blockchain is the model of Bitcoin, Ethereum, and Litecoin and is essentially considered to be the original distributed ledger structure. This type of blockchain is completely open: anyone can join and participate in the network, it can receive and send transactions from anybody in the world, and it can be audited by anyone in the system. Each node (a computer connected to the network) has as much transmission capability and power as any other, making public blockchains not only decentralized but fully distributed as well. ... Private blockchains, on the other hand, are essentially forks of the originator but are deployed in what is called a permissioned manner. To gain access to a private blockchain network, one must be invited and then validated, either by the network starter or by specific rules the network starter put in place. Once the invitation is accepted, the new entity can contribute to the maintenance of the blockchain in the customary manner. Because the blockchain runs on a closed network, it offers the benefits of the technology but not necessarily the distributed character of a public blockchain.
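The ledger structure shared by both models can be sketched in a few lines: each block commits to the hash of its predecessor, so tampering with any block invalidates everything after it. This toy sketch (function names are illustrative) shows only that structural property; what differs between public and private deployments is who is permitted to append and validate, not the chain itself.

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's canonical JSON form.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    # Each new block commits to its predecessor's hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transactions})

def valid(chain):
    # Re-derive each link; any edit to an earlier block breaks it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, ["alice->bob:5"])
add_block(chain, ["bob->carol:2"])
assert valid(chain)

chain[0]["tx"] = ["alice->bob:500"]  # tamper with history
assert not valid(chain)
```

In a public network, thousands of equal peers hold copies of this chain and reject the tampered version; in a permissioned network, the same integrity check runs, but only invited, validated nodes get to hold and extend the ledger.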



Quote for the day:

"Every moment is a golden one for those who have the vision to recognize it as such." -- Henry Miller