
Daily Tech Digest - May 31, 2024

Flawed AI Tools Create Worries for Private LLMs, Chatbots

The research underscores that the rush to integrate AI into business processes does pose risks, especially for companies that are giving LLMs and other generative-AI applications access to large repositories of data. ... The risks posed by the adoption of next-gen artificial intelligence and machine learning (AI/ML) are not necessarily due to the models, which tend to have smaller attack surfaces, but the software components and tools for developing AI applications and interfaces, says Dan McInerney, lead AI threat researcher with Protect AI, an AI application security firm. "There's not a lot of magical incantations that you can send to an LLM and have it spit out passwords and sensitive info," he says. "But there's a lot of vulnerabilities in the servers that are used to host LLMs. The [LLM] is really not where you're going to get hacked — you're going to get hacked from all the tools you use around the LLM." ... "Exploitation of this vulnerability could affect the immediate functioning of the model and can have long-lasting effects on its credibility and the security of the systems that rely on it," Synopsys stated in its advisory. 


Cyber resiliency is a key focus for us: Balaji Rao, Area VP – India & SAARC, Commvault

Referring to the classical MITRE framework, the recommendation is to “shift right” – moving focus towards recovery. After thoroughly assessing risks and implementing various tools, it’s crucial to have a solid recovery plan in place. Customers are increasingly concerned about scenarios where both their primary and disaster recovery (DR) systems are compromised by ransomware, and their backups are unavailable. According to a Microsoft report, in 98% of successful ransomware cases, backups are disabled. To address this concern, the strategy involves building a cyber resilient framework that prioritises recovery. ... For us, AI serves multiple purposes, primarily enhancing efficiency, scanning for threats, and addressing customer training and enablement needs. From a security perspective, we leverage AI extensively to detect ransomware-related risks. Its rapid data processing capabilities allow for thorough scanning across vast datasets, enabling pattern matching and identifying changes indicative of potential threats. We’ve integrated AI into our threat scanning solutions, strengthening our ability to detect and mitigate malware by leveraging comprehensive malware databases.


The importance of developing second-line leaders

Developing second-line leaders helps your business unit or function succeed at a whole new level: When your teams know that leadership development is a priority, they start preparing for future roles. The top talent will cultivate their skills and equip themselves for leadership positions, enhancing overall team performance. As the cascading effect builds, this proactive development has a multiplicative impact, especially if competition within the team remains healthy. It's also important for your personal growth as a leader: The most fulfilling aspect is the impact on yourself. If you measure your leadership success by contribution, attribution, and legacy, developing capable successors fulfils all three criteria. It ensures you contribute effectively, gain recognition for building strong teams, and leave a lasting legacy through the leaders you've developed. ... It starts with the self. Begin with delegation without abdication or evasion of accountability. This skill is a cornerstone of effective leadership, involving the entrusting of responsibilities to others while empowering them to assume ownership and make informed decisions.


Navigating The AI Revolution: Balancing Risks And Opportunities

Effective trust management requires specific approaches, such as robust monitoring systems, rigorous auditing processes and well-defined incident response plans. More importantly, in order for any initiative to address AI risks to be successful, we as an industry need to build a workforce of trained professionals. Those operating in the digital trust domain, including cybersecurity, privacy, assurance, risk and governance of digital technology, need to understand AI before building controls around it. The ISACA AI survey revealed that 85% of digital trust professionals say they will need to increase their AI skills and knowledge within two years to advance or retain their jobs. This highlights the importance of continuous learning and adaptation for cybersecurity professionals in the era of AI. Gaining a deeper understanding of how AI-powered attacks are altering the threat landscape, along with how AI can be effectively utilized by security practitioners, will be essential. As security professionals learn more about AI, they need to ensure that the methods being deployed align with an enterprise’s overarching need to maintain trust with its stakeholders.


CISO’s Guide to 5G Security: Risks, Resilience and Fortifications

A strong security posture requires granular visibility into 5G traffic and automated security enforcement to effectively thwart attackers, protect critical services, and safeguard against potential threats to assets and the environment. This includes a focus on detecting and preventing attacks across every layer, interface and threat vector — from equipment (PEI) and subscriber (SUPI) identification, applications, signaling, data, network slices, malware, ransomware and more. ... To accomplish the task at hand brought about by 5G, CISOs must be prepared to provide a swift response to known and unknown threats in real time with advanced AI and machine learning, automation and orchestration tools. As perceptions shift from viewing 4G as a more consumer-focused mobile network to the power of private 5G embedded across enterprise infrastructure, any kind of lateral network movement can bring about damage. ... Strategy and solution start with zero trust and can go as far as an entire 5G SOC dedicated to the nuances brought about by the next-gen network. The change and progress 5G promises is only as significant as our ability to protect networks and infrastructure from malicious actors, threats, and attacks.


Cloud access security brokers (CASBs): What to know before you buy

CASBs sit between an organization’s endpoints and cloud resources, acting as a gateway that monitors everything that goes in or out, providing visibility into what users are doing in the cloud, enforcing access control policies, and looking out for security threats. ... The original use case for CASBs was to address shadow IT. When security execs deployed their first CASB tools, they were surprised to discover how many employees had their own personal cloud storage accounts, where they squirreled away corporate data. CASB tools can help security teams discover and monitor unauthorized or unmanaged cloud services being used by employees. ... Buying a CASB tool can be complex. There’s a laundry list of possible features that fall within the broad CASB definition (DLP, SWG, etc.). And CASB tools themselves are part of a larger trend toward SSE and SASE platforms that include features such as ZTNA or SD-WAN. Enterprises need to identify their specific pain points — whether that’s regulatory compliance or shadow IT — and select a vendor that meets their immediate needs and can also grow with the enterprise over time.


What is model quantization? Smaller, faster LLMs

Why do we need quantization? The current large language models (LLMs) are enormous. The best models need to run on a cluster of server-class GPUs; gone are the days when you could run a state-of-the-art model locally on one GPU and get quick results. Quantization not only makes it possible to run an LLM on a single GPU, it allows you to run it on a CPU or on an edge device. ... As you might expect, accuracy may be an issue when you quantize a model. You can evaluate the accuracy of a quantized model against the original model, and decide whether the quantized model is sufficiently accurate for your purposes. For example, TensorFlow Lite offers three executables for checking the accuracy of quantized models. You might also consider MQBench, a benchmark and framework for evaluating quantization algorithms under real-world hardware deployments that uses PyTorch. If the degradation in accuracy from post-training quantization is too high, then one alternative is to use quantization aware training.
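
To make the idea concrete, here is a minimal sketch of post-training dynamic quantization using PyTorch. The toy model, layer sizes, and comparison step are illustrative assumptions, not taken from the article; a real workflow would apply the same call to a pretrained network.

```python
# Minimal sketch: post-training dynamic quantization with PyTorch.
# The model below is a stand-in; in practice this would be a pretrained LLM or other network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Convert the weights of Linear layers to 8-bit integers; activations are
# quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Compare a forward pass of the original and quantized models to gauge accuracy drift.
x = torch.randn(1, 512)
print(model(x))
print(quantized(x))
```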


Europe Declares War on Tech Spoofing

In the new Payment Services Regulation, members of the European Parliament argued that messaging services such as WhatsApp, digital platforms such as Facebook, or marketplaces such as Amazon and eBay could be liable for scams that originate on their platforms, on a par with banks and other payment service providers. ... Europe’s new payment regulations are now up for negotiation in Brussels. Large US tech firms and messaging apps are pushing to lower the liability risk. They argue banks, not them, should be responsible. With spoofing or impersonation scams, the fraudulent transaction occurs on banking service portals, not the platforms. And so, banks themselves should enhance their security measures or pay the price. Banks, not surprisingly, disagree. They cannot control the entry points that fraudsters use to reach consumers, whether it is by phone, messaging apps, online ads, or the dark web. Why shouldn’t telecom network operators, messaging, and other digital platforms also be obliged to prevent fraudsters from reaching consumers and, if they fail, be held liable?


Process mining helps IT leaders modernize business operations

Process mining has the potential to enable organizations to make quicker, more informed decisions when overhauling business processes by leveraging data for insights. By using the information gleaned from process mining, companies can better streamline workflows, enhance resource allocation, and automate repetitive tasks. ... Successful deployment and maintenance of process mining requires a clear vision from the management team and board, Mortello says, as well as commitment and persistence. “Process mining doesn’t usually yield immediate, tangible results, but it can offer unique insights into how a company operates,” he says. “A leadership team with a long-term vision is crucial to ensure the technology is utilized to its full potential.” It’s also important to thoroughly analyze processes prior to “fixing” them. “Make sure you have a good handle on the process you think you have and the ones you really have,” Constellation Research’s Wang says. “What we see across the board is a quick realization that what’s assumed and what’s done is very different.”


Could the Next War Begin in Cyberspace?

In a cyberwar, disinformation campaigns will likely be used to spread misinformation and collect data that can be leveraged to sway public opinion on key issues, Janzen says. "We can build very sophisticated security systems, but so long as we have people using those systems, they will be targeted to willingly or unwillingly allow malicious actors into those systems." ... How long a cyberspace war might last is inherently unpredictable, characterized by its persistent and ongoing nature, Menon says. "In contrast to conventional wars, marked by distinct start and end points, cyber conflicts lack geographical constraints," he notes. "These battles involve continuous attacks, defenses, and counterattacks." The core of cyberspace warfare lies in understanding algorithms, devising methods to breach them, and inventing new technologies to dismantle legacy systems, Menon says. "These factors, coupled with the relatively low financial investment required, contribute to the sporadic and unpredictable nature of cyberwars, making it challenging to anticipate when they may commence."



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates

Daily Tech Digest - July 28, 2023

Cyber criminals pivot away from ransomware encryption

“Data theft extortion is not a new phenomenon, but the number of incidents this quarter suggests that financially motivated threat actors are increasingly seeing this as a viable means of receiving a final payout,” wrote report author Nicole Hoffman. “Carrying out ransomware attacks is likely becoming more challenging due to global law enforcement and industry disruption efforts, as well as the implementation of defences such as increased behavioural detection capabilities and endpoint detection and response (EDR) solutions,” she said. In the case of Clop’s attacks, Hoffman observed that it was “highly unusual” for a ransomware group to so consistently exploit zero-days given the sheer time, effort and resourcing needed to develop exploits. She suggested this meant that Clop likely has a level of sophistication and funding that is matched only by state-backed advanced persistent threat actors. Given Clop’s incorporation of zero-days in MFT products into its playbook, and its rampant success in doing so ...


Get the best value from your data by reducing risk and building trust

Data risk is potentially detrimental to the business due to data mismanagement, inadequate data governance, and poor data security. Data risk that isn’t recognized and mitigated can often result in a costly security breach. To improve security posture, enterprises need to have an effective strategy for managing data, ensure data protection is compliant with regulations and look for solutions that provide access controls, end-to-end encryption, and zero-trust access, for example. Assessing data risk is not a tick-box exercise. The attack landscape is constantly changing, and enterprises must assess their data risk regularly to evaluate their security and privacy best practices. Data subject access requests are when an individual submits an inquiry asking how their personal data is harvested, stored, and used. It is a requirement of several data privacy regulations, including GDPR. It is recommended that enterprises automate these data subject requests to make them easier to track, preserve data integrity, and ensure they are handled swiftly to avoid penalties.


Why Developers Need Their Own Observability

The goal of operators’ and site reliability engineers’ observability efforts is straightforward: Aggregate logs and other telemetry, detect threats, monitor application and infrastructure performance, detect anomalies in behavior, prioritize those anomalies, identify their root causes and route discovered problems to their underlying owner. Basically, operators want to keep everything up and running — an important goal but not one that developers may share. Developers require observability as well, but for different reasons. Today’s developers are responsible for the success of the code they deploy. As a result, they need ongoing visibility into how the code they’re working on will behave in production. Unlike operations-focused observability tooling, developer-focused observability focuses on issues that matter to developers, like document object model (DOM) events, API behavior, detecting bad code patterns and smells, identifying problematic lines of code and test coverage. Observability, therefore, means something different to developers than operators, because developers want to look at application telemetry data in different ways to help them solve code-related problems.
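
As a rough illustration of developer-focused instrumentation, here is a minimal sketch using the OpenTelemetry Python API. The service name, span name, attributes, and the `charge` helper are assumptions introduced for the example, not details from the article.

```python
# Minimal sketch: a developer wraps a code path they own in a span so its behaviour
# (attributes, exceptions, problematic lines) is visible from production telemetry.
from opentelemetry import trace

tracer = trace.get_tracer("checkout-service")

def charge(order):
    # Placeholder for a real downstream call; hypothetical helper.
    return True

def process_order(order):
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.item_count", len(order["items"]))
        try:
            charge(order)
        except Exception as exc:
            # Records the failure on the span, pointing developers at the failing code path.
            span.record_exception(exc)
            raise

process_order({"items": ["sku-1", "sku-2"]})
```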


Understanding the value of holistic data management

Data holds valuable insights into customer behaviour, preferences and needs. Holistic management of data enables organisations to consolidate and analyse their customers’ data from multiple sources, leading to a comprehensive understanding of their target audience. This knowledge allows companies to tailor their products, services and marketing efforts to better meet customer expectations, which can result in improved customer satisfaction and loyalty. With some on-market tools, organisations can draw relationships between their customers and see how they are connected. Establishing customer relationships can be very beneficial, especially for targeted marketing. To demonstrate this point: for example, an e-mail arrives in your inbox shortly before your anniversary date, suggesting a specifically tailor-made gift for your partner. It is extremely important for an organisation to have a competitive edge and to stay relevant. Data that is not holistically managed will slow down the organisation's ability to make timely and informed decisions, hindering its ability to respond quickly to changing market dynamics and stay ahead of its competitors.


Why Today's CISOs Must Embrace Change

While this is a long-standing challenge, I've seen the tide turn over the past four or five years, especially when COVID happened. Just the nature of the event necessitated dramatic change in organizations. During the pandemic, CISOs who said "no, no, no," lost their place in the organization, while those who said yes and embraced change were elevated. Today we're hitting an inflection point where organizations that embrace change will outpace the organizations that don't. Organizations that don't will become the low-hanging fruit for attackers. We need to adopt new tools and technologies while, at the same time, we help guide the business across the fast-evolving threat landscape. Speaking of new technologies, I heard someone say AI and tools won't replace humans, but the humans that leverage those tools will replace those that don't. I really like that — these tools become the "Iron Man" suit for all the folks out there who are trying to defend organizations proactively and reactively. Leveraging all those tools in combination with great intelligence, I think, enables organizations to outpace the organizations that are moving more slowly and many adversaries.


Navigating Digital Transformation While Cultivating a Security Culture

When it comes to security and digital transformation, one of the first things that comes to mind for Reynolds is the tech surface. “As you evolve and transition from legacy to new, both stay parallel running, right? Being able to manage the old but also integrate the new, but with new also comes more complexity, more security rules,” he says. “A good example is cloud security. While it’s great for onboarding and just getting stuff up and running, they do have this concept of shared security where they manage infrastructure, they manage the storage, but really, the IAM, the access management, the network configuration, and ingress and egress traffic from the network are still your responsibility. And as you evolve to that and add more and more cloud providers, more integrations, it becomes much more complex.” “There’s also more data transference, so there are a lot of data privacy and compliance requirements there, especially as the world evolves with GDPR, which everyone hopefully by now knows.”


Breach Roundup: Zenbleed Flaw Exposes AMD Ryzen CPUs

A critical vulnerability affecting AMD's Zen 2 processors, including popular CPUs such as the Ryzen 5 3600, was uncovered by Google security researcher Tavis Ormandy. Dubbed Zenbleed, the flaw allows attackers to steal sensitive data such as passwords and encryption keys without requiring physical access to the computer. Tracked as CVE-2023-20593, the vulnerability can be exploited remotely, making it a serious concern for cloud-hosted services. The vulnerability affects the entire Zen 2 product range, including AMD Ryzen and Ryzen Pro 3000/4000/5000/7020 series, and the EPYC "Rome" data center processors. Data can be transferred at a rate of 30 kilobits per core, per second, allowing information extraction from various software running on the system, including virtual machines and containers. Zenbleed operates without any special system calls or privileges, making detection challenging. While AMD released a microcode patch for second-generation Epyc 7002 processors, other CPU lines will have to wait until at least October 2023. 


The Role of Digital Twins in Unlocking the Cloud's Potential

A DT, in essence, is a high-fidelity virtual model designed to mirror an aspect of a physical entity accurately. Let’s imagine a piece of complex machinery in a factory. This machine is equipped with numerous sensors, each collecting data related to critical areas of functionality from temperature to mechanical stress, speed, and more. This vast array of data is then transmitted to the machine’s digital counterpart. With this rich set of data, the DT becomes more than just a static replica. It evolves into a dynamic model that can simulate the machinery’s operation under various conditions, study performance issues, and even suggest potential improvements. The ultimate goal of these simulations and studies is to generate valuable insights that can be applied to the original physical entity, enhancing its performance and longevity. The resulting architecture is a dual Cyber-Physical System with a constant flow of data that brings unique insights into the physical realm from the digital realm.
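
The factory-machinery scenario can be sketched in a few lines of code. The following is a minimal, illustrative digital twin: the class name, sensor fields, and the crude what-if formula are all assumptions made for the example, not part of the article.

```python
# Minimal sketch of a digital twin that mirrors telemetry from a physical machine
# and runs a simple what-if simulation whose result can be fed back to the asset.
from dataclasses import dataclass, field

@dataclass
class MachineTwin:
    temperature_c: float = 20.0
    stress_mpa: float = 0.0
    speed_rpm: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the virtual state from a telemetry message sent by the machine's sensors."""
        self.temperature_c = reading.get("temperature_c", self.temperature_c)
        self.stress_mpa = reading.get("stress_mpa", self.stress_mpa)
        self.speed_rpm = reading.get("speed_rpm", self.speed_rpm)
        self.history.append(reading)

    def simulate_speed_increase(self, factor: float) -> float:
        """Crude what-if: estimate temperature if the machine ran `factor` times faster."""
        return self.temperature_c * (1 + 0.1 * (factor - 1))

twin = MachineTwin()
twin.ingest({"temperature_c": 65.0, "stress_mpa": 120.0, "speed_rpm": 1500})
print(twin.simulate_speed_increase(1.5))  # insight to apply back to the physical entity
```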


The power of process mining in Power Automate

Having tools that identify and optimize processes is an important foundation for any form of process automation, especially as we often must rely on manual walkthroughs. We need to be able to see how information and documents flow through a business in order to be able to identify places where systems can be improved. Maybe there’s an unnecessary approval step between data going into line-of-business applications and then being booked into a CRM tool, where it sits for several days. Modern process mining tools take advantage of the fact that much of the data in our businesses is already labeled. It’s tied to database tables or sourced from the line-of-business applications we have chosen to use as systems of record. We can use these systems to identify the data associated with, say, a contract, and where it needs to be used, as well as who needs to use it. With that data we can then identify the process flows associated with it, using performance indicators to identify inefficiencies, as well as where we can automate manual processes—for example, by surfacing approvals as adaptive cards in Microsoft Teams or in Outlook.
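
Power Automate's process mining is a commercial tool, but the underlying idea can be sketched with the open-source pm4py library: discover, from a labeled event log, which step follows which and how often, then flag slow or overly frequent transitions as automation candidates. The CSV file and column names below are assumptions for illustration.

```python
# Illustrative sketch (not Power Automate itself): discovering a directly-follows graph
# from an event log exported from a system of record, using the pm4py library.
import pandas as pd
import pm4py

df = pd.read_csv("contract_events.csv")  # hypothetical export: one row per step in a contract's life
df = pm4py.format_dataframe(
    df, case_id="contract_id", activity_key="activity", timestamp_key="timestamp"
)

# Which activity follows which, and how often, across all contracts.
dfg, start_activities, end_activities = pm4py.discover_dfg(df)

# Unusually frequent or slow transitions (e.g. an extra approval step before the CRM booking)
# are candidates for removal or automation.
for (source, target), count in sorted(dfg.items(), key=lambda kv: -kv[1]):
    print(f"{source} -> {target}: {count}")
```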


Data Program Disasters: Unveiling the Common Pitfalls

In the realm of data management, it’s tempting to be swayed by the enticing promises of new tools that offer lineage, provenance, cataloguing, observability, and more. However, beneath the glossy marketing exterior lies the lurking devil of hidden costs that can burn a hole in your wallet. Let’s consider an example: while you may have successfully negotiated a reduction in compute costs, you might have overlooked the expenses associated with data egress. This oversight could lead to long-term vendor lock-in or force you to spend the hard-earned savings secured through skilful negotiation on the data outflow. This is just one instance among many; there are live examples where organizations have chosen tools solely based on their features, only to discover later that such tools did not fully comply with the regulations of their industry or of the country they operate in. In such cases, you’re left with two options: either wait for the vendor to become compliant, severely stifling your Go-To-Market strategy, or supplement your setup with additional services, effectively negating your cost-saving efforts and bloating your architecture.



Quote for the day:

"It's very important in a leadership role not to place your ego at the foreground and not to judge everything in relationship to how your ego is fed." -- Ruth J. Simmons

Daily Tech Digest - April 28, 2021

The Rise of Cognitive AI

There is a strong push for AI to reach into the realm of human-like understanding. Leaning on the paradigm defined by Daniel Kahneman in his book, Thinking, Fast and Slow, Yoshua Bengio equates the capabilities of contemporary DL to what he characterizes as “System 1” — intuitive, fast, unconscious, habitual, and largely resolved. In contrast, he stipulates that the next challenge for AI systems lies in implementing the capabilities of “System 2” — slow, logical, sequential, conscious, and algorithmic, such as the capabilities needed in planning and reasoning. In a similar fashion, Francois Chollet describes an emergent new phase in the progression of AI capabilities based on broad generalization (“Flexible AI”), capable of adaptation to unknown unknowns within a broad domain. Both these characterizations align with DARPA’s Third Wave of AI, characterized by contextual adaptation, abstraction, reasoning, and explainability, with systems constructing contextual explanatory models for classes of real-world phenomena. These competencies cannot be addressed just by playing back past experiences. One possible path to achieve these competencies is through the integration of DL with symbolic reasoning and deep knowledge.


Singapore puts budget focus on transformation, innovation

Plans are also underway to enhance the Open Innovation Platform with new features to link up companies and government agencies with relevant technology providers to resolve their business challenges. A cloud-based digital bench, for instance, would help facilitate virtual prototyping and testing, Heng said. The Open Innovation Platform also offers co-funding support for prototyping and deployment, he added. The Building and Construction Authority, for example, was matched with three technology providers -- TraceSafe, TagBox, and Nervotec -- to develop tools to enable the safe reopening of worksites. These include real-time systems that have enabled construction site owners to conduct COVID-19 contact tracing and health monitoring of their employees. Enhancements would also be made to the Global Innovation Alliance, which was introduced in 2017 to facilitate cross-border partnerships between Singapore and global innovation hubs. Since its launch, more than 650 students and 780 Singapore businesses had participated in innovation launchpads overseas, of which 40% were in Southeast Asia, according to Heng.


Machine learning security vulnerabilities are a growing threat to the web, report highlights

Most machine learning algorithms require large sets of labeled data to train models. In many cases, instead of going through the effort of creating their own datasets, machine learning developers search and download datasets published on GitHub, Kaggle, or other web platforms. Eugene Neelou, co-founder and CTO of Adversa, warned about potential vulnerabilities in these datasets that can lead to data poisoning attacks. “Poisoning data with maliciously crafted data samples may make AI models learn those data entries during training, thus learning malicious triggers,” Neelou told The Daily Swig. “The model will behave as intended in normal conditions, but malicious actors may call those hidden triggers during attacks.” Neelou also warned about trojan attacks, where adversaries distribute contaminated models on web platforms. “Instead of poisoning data, attackers have control over the AI model internal parameters,” Neelou said. “They could train/customize and distribute their infected models via GitHub or model platforms/marketplaces.”
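
To show how little poisoned data it takes to shift a model's behaviour, here is a minimal label-flipping sketch on a toy classifier. The dataset, model, and 10% poisoning rate are illustrative choices, not drawn from the report.

```python
# Minimal sketch: a label-flipping data-poisoning attack on a toy scikit-learn classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker who controls part of a downloaded dataset flips 10% of the training labels.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=len(y_train) // 10, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```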


Demystifying the Transition to Microservices

The very first step you should be taking is to embrace container technology. The biggest difference between a service-oriented architecture and a microservice-oriented architecture is that in the second one, the deployment is so complex, there are so many pieces with independent lifecycles, and each piece needs to have some custom configuration that it can no longer be managed manually. In a service-oriented architecture, with a handful of monolithic applications, the infrastructure team can still treat each of them as a separate application and manage them individually in terms of the release process, monitoring, health check, configuration, etc. With microservices, this is not possible at a reasonable cost. There will eventually be hundreds of different 'applications,' each of them with its own release cycle, health check, configuration, etc., so their lifecycle has to be managed automatically. There may be other technologies to do so, but microservices have become almost a synonym of containers. You will need not only manually started Docker containers but also an orchestrator. Kubernetes or Docker Swarm are the most popular ones.


Ransomware: don’t expect a full recovery, however much you pay

Remember also that an additional “promise” you are paying for in many contemporary ransomware attacks is that the criminals will permanently and irrevocably delete any and all of the files they stole from your network while the attack was underway. You’re not only paying for a positive, namely that the crooks will restore your files, but also for a negative, namely that the crooks won’t leak them to anyone else. And unlike the “how much did you get back” figure, which can be measured objectively simply by running the decryption program offline and seeing which files get recovered, you have absolutely no way of measuring how properly your already-stolen data has been deleted, if indeed the criminals have deleted it at all. Indeed, many ransomware gangs handle the data stealing side of their attacks by running a series of upload scripts that copy your precious files to an online file-locker service, using an account that they created for the purpose. Even if they insist that they deleted the account after receiving your money, how can you ever tell who else acquired the password to that file locker account while your files were up there?


Linux Kernel Bug Opens Door to Wider Cyberattacks

Proc is a special, pseudo-filesystem in Unix-like operating systems that is used for dynamically accessing process data held in the kernel. It presents information about processes and other system information in a hierarchical file-like structure. For instance, it contains /proc/[pid] subdirectories, each of which contains files and subdirectories exposing information about specific processes, readable by using the corresponding process ID. In the case of the “syscall” file, it’s a legitimate Linux operating system file that contains logs of system calls used by the kernel. An attacker could exploit the vulnerability by reading /proc/<pid>/syscall. “We can see the output on any given Linux system whose kernel was configured with CONFIG_HAVE_ARCH_TRACEHOOK,” according to Cisco’s bug report, publicly disclosed on Tuesday. “This file exposes the system call number and argument registers for the system call currently being executed by the process, followed by the values of the stack pointer and program counter registers,” explained the firm. “The values of all six argument registers are exposed, although most system calls use fewer registers.”
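
For readers unfamiliar with the file in question, this short sketch simply reads a process's own syscall entry to show what is exposed. It assumes a Linux kernel built with CONFIG_HAVE_ARCH_TRACEHOOK, as the advisory describes; the exact output format varies by architecture.

```python
# Minimal sketch: reading the per-process syscall file described in the advisory.
import os

pid = os.getpid()  # any PID the current user can read would do
with open(f"/proc/{pid}/syscall") as f:
    # One line: syscall number, up to six argument registers, stack pointer, program counter.
    print(f.read().strip())
```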


Process Mining – A New Stream Of Data Science Empowering Businesses

It is needless to emphasise that data is the new oil; time and again, data has shown us that businesses cannot run without it. We need to embrace not just the importance of data these days but the sheer need for it. Every business runs on a set of processes designed and defined to make everything function smoothly, which is achieved through Business Process Management. Each business process has three main pillars – steps, goals and stakeholders – where a series of steps is performed by certain stakeholders to achieve a concrete goal. And as we move into a future where entire businesses are driven by a data value chain that supports decision systems, we cannot ignore the usefulness of data science combined with business process management. This new stream of data science is called process mining. As Celonis, a world-leading process mining platform provider, puts it: “Process mining is an analytical discipline for discovering, monitoring, and improving processes as they actually are (not as you think they might be), by extracting knowledge from event logs readily available in today’s information systems.”


Alexandria in Microsoft Viva Topics: from big data to big knowledge

Project Alexandria is a research project within Microsoft Research Cambridge dedicated to discovering entities, or topics of information, and their associated properties from unstructured documents. This research lab has studied knowledge mining research for over a decade, using the probabilistic programming framework Infer.NET. Project Alexandria was established seven years ago to build on Infer.NET and retrieve facts, schemas, and entities from unstructured data sources while adhering to Microsoft’s robust privacy standards. The goal of the project is to construct a full knowledge base from a set of documents, entirely automatically. The Alexandria research team is uniquely positioned to make direct contributions to new Microsoft products. Alexandria technology plays a central role in the recently announced Microsoft Viva Topics, an AI product that automatically organizes large amounts of content and expertise, making it easier for people to find information and act on it. Specifically, the Alexandria team is responsible for identifying topics and rich metadata, and combining other innovative Microsoft knowledge mining technologies to enhance the end user experience.


How Vodafone Greece Built 80 Java Microservices in Quarkus

The company now has 80 Quarkus microservices running in production with another 50-60 Spring microservices remaining in maintenance mode and awaiting a business motive to update. Vodafone Greece’s success wasn’t just because of Sotiriou’s technology choices — he also cited organizational transitions the company made to encourage collaboration. “There is also a very human aspect in this. It was a risk, and we knew it was a risk. There was a lot of trust required for the team, and such a big amount of trust percolated into organizing a small team around the infrastructure that would later become the shared libraries or common libraries. When we decided to do the migration, the most important thing was not to break the business continuity. The second most important thing was that if we wanted to be efficient long term, we’d have to invest in development and research. We wouldn’t be able to do that if we didn’t follow a code to invest part of our time into expanding our server infrastructure,” said Sotiriou. That was extra important for a team that scaled from two to 40 in just under three years.


The next big thing in cloud computing? Shh… It’s confidential

The confidential cloud employs these technologies to establish a secure and impenetrable cryptographic perimeter that seamlessly extends from a hardware root of trust to protect data in use, at rest, and in motion. Unlike the traditional layered security approaches that place barriers between data and bad actors or standalone encryption for storage or communication, the confidential cloud delivers strong data protection that is inseparable from the data itself. This in turn eliminates the need for traditional perimeter security layers, while putting data owners in exclusive control wherever their data is stored, transmitted, or used. The resulting confidential cloud is similar in concept to network micro-segmentation and resource virtualization. But instead of isolating and controlling only network communications, the confidential cloud extends data encryption and resource isolation across all of the fundamental elements of IT, compute, storage, and communications. The confidential cloud brings together everything needed to confidentially run any workload in a trusted environment isolated from CloudOps insiders, malicious software, or would-be attackers.



Quote for the day:

"Lead, follow, or get out of the way." -- Laurence J. Peter

Daily Tech Digest - February 06, 2021

Artificial intelligence must not be allowed to replace the imperfection of human empathy

In the perfectly productive world, humans would be accounted as worthless, certainly in terms of productivity but also in terms of our feeble humanity. Unless we jettison this perfectionist attitude towards life that positions productivity and “material growth” above sustainability and individual happiness, AI research could be another chain in the history of self-defeating human inventions. Already we are witnessing discrimination in algorithmic calculations. Recently, a popular South Korean chatbot named Lee Luda was taken offline. “She” was modelled after the persona of a 20-year-old female university student and was removed from Facebook messenger after using hate speech towards LGBT people. Meanwhile, automated weapons programmed to kill are carrying maxims such as “productivity” and “efficiency” into battle. As a result, war has become more sustainable. The proliferation of drone warfare is a very vivid example of these new forms of conflict. They create a virtual reality that is almost absent from our grasp. But it would be comical to depict AI as an inevitable Orwellian nightmare of an army of super-intelligent “Terminators” whose mission is to erase the human race.


The robots are ready – how can business leaders take the leap?

Robots and intelligent technology can now optimise something we’ve never been able to before: the bandwidth of employees. This has become increasingly more critical as staff adjust to remote working. By onboarding these new tools and incorporating them into the workforce, businesses can empower their staff to do more. They can automate mundane and repetitive tasks extremely quickly, giving their human colleagues more time to take on problem-solving and time-consuming tasks. In fact, 4 in 5 employees that use robots and digital workers say they have been beneficial with efficiency and collaboration, and are useful in easing the burden of administrative tasks. Employees have found that a ‘robotic helping hand’ has been most appreciated for sorting data and documents, providing prompts for pending tasks, and digitising paperwork. What’s also clear is that some businesses do have the right tools in place to help. In fact, half of UK employees said processes helped them do their job faster and collaborate better, both critical during the pandemic. However, for business leaders, the pressure to get automation right is huge. It’s a major investment of time, money, and energy for everyone involved. 


Why process mining is seeing triple-digit growth

Many enterprises are finding it difficult to scale beyond a few software robots or bots because they are automating a bad process that cannot scale. “Most businesses are automating processes through RPA and hyperautomation without first fully understanding their data and processes,” explained Gero Decker, CEO of Signavio, a SAP spinoff focused on business transformation. As enterprises pursue increased efficiencies, there is debate about whether it makes more sense to automate what exists or to fix it first. Automating a bad process may make it faster, but it may also suffer from chokepoints caused by integration with legacy systems or approval processes. Process mining can help a company fix a bad process first. Chris Nicholson, CEO of Pathmind, a company applying AI to industrial operations, argues, “The main challenge to overcome before applying process automation is to standardize the current processes performed by people. If they are not standardized, there can be no automation.” With process mining, companies can see whether their current processes are standardized so they know which problem they have to solve first: standardization or automation.


Sophisticated cybersecurity threats demand collaborative, global response

The cybersecurity industry has long been aware that sophisticated and well-funded actors were theoretically capable of advanced techniques, patience, and operating below the radar, but this incident has proven that it isn’t just theoretical. We believe the Solorigate incident has proven the benefit of the industry working together to share information, strengthen defenses, and respond to attacks. Additionally, the attacks have reinforced two key points that the industry has been advocating for a while now—defense-in-depth protections and embracing a zero trust mindset. Defense-in-depth protections and best practices are really important because each layer of defense provides an extra opportunity to detect an attack and take action before they get closer to valuable assets. We saw this ourselves in our internal investigation, where we found evidence of attempted activities that were thwarted by defense-in-depth protections. So, we again want to reiterate the value of industry best practices such as outlined here, and implementing Privileged Access Workstations (PAW) as part of a strategy to protect privileged accounts.


AI Transformation in 2021: In-Depth guide for executives

AI transformation touches all aspects of the modern enterprise including both commercial and operational activities. Tech giants are integrating AI into their processes and products. For example, Google is calling itself an “AI-first” organization. Besides tech giants, IDC estimates that at least 90% of new organizations will insert AI technology into their processes and products by 2025. ... The first few projects should create measurable business value while being attainable. Completed projects help the transformation gain trust across the organization and create momentum that leads to more successful AI projects. These projects can rely on AI/ML-powered tools in the marketplace or, for more custom solutions, your company can run a data science competition and rely on the wisdom of hundreds of data scientists. These competitions use encrypted data and provide a low-cost way to find high-performing data science solutions. bitgrit is a company that helps companies identify AI use cases and run data science competitions. Implementing process mining tools is one of those easy-to-achieve and impactful projects. For example, QPR’s Process Analyzer tool has an extensive set of ready-to-use process mining analyses, including ready-to-use clustering analysis and process predictions, as well as a platform for machine learning based analyses.


Microsoft Says It's Time to Attack Your Machine-Learning Models

Machine-learning researchers are focused on attacks that pollute machine learning data, epitomized by presenting two seemingly identical images of, say, a tabby cat, and having the AI algorithm identify them as two completely different things, he said. More than 2,000 papers have been written in the last few years, citing these sorts of examples and proposing defenses, he said. "Meanwhile, security professionals are dealing with things like SolarWinds, software updates and SSL patches, phishing and education, ransomware, and cloud credentials that you just checked into Github," Anderson said. "And they are left to wonder what the recognition of a tabby cat has to do with the problems they are dealing with today." ... Anderson shared a red team exercise conducted by Microsoft where the team aimed to abuse a Web portal used for software resource requests and the internal machine-learning algorithm that determines automatically to which physical hardware it assigns a requested container or virtual machine. The red team started with credentials for the service, under the assumption that attackers will be able to gather valid credentials - either by phishing or because an employee reuses their user name and password.


Microsoft: Office 365 Was Not SolarWinds Initial Attack Vector

In its Thursday blog, the Microsoft team says the compromise techniques leveraged by the SolarWinds hackers included "password spraying, spear-phishing and use of webshell through a web server and delegated credentials." Earlier this week, acting CISA Director Brandon Wales told The Wall Street Journal that the SolarWinds cyberespionage operation gained access to targets using a multitude of methods, including password spraying and through exploits of vulnerabilities in cloud software (see: SolarWinds Hackers Cast a Wide Net). "As part of the investigative team working with FireEye, we were able to analyze the attacker’s behavior with a forensic investigation and identify unusual technical indicators that would not be associated with normal user interactions. We then used our telemetry to search for those indicators and identify organizations where credentials had likely been compromised by the [SolarWinds hackers]," Microsoft's security team says. But Microsoft says it's found no evidence that the SolarWinds hackers used Office 365 as an attack vector. "We have investigated thoroughly and have found no evidence they [SolarWinds] were attacked via Office 365," the Microsoft researchers say. "The wording of the SolarWinds 8K filing was unfortunately ambiguous, leading to erroneous interpretation and speculation, which is not supported by the results of our investigation."


Data loss prevention strategies for long-term remote teams

For many, a distributed hybrid workforce is the new normal, vastly expanding their threat landscape and making it more challenging to secure data and IT infrastructure. In this environment, companies need to pivot their defensive capacity, ensuring that they are prepared to meet the moment (i.e., the threats). When considering cybersecurity threats, we often think of shady cybercriminals or nation-states hacking company networks. After all, when these incidents occur, they make worldwide news headlines. For most companies, however, external bad actors aren’t the most critical risk. A company’s employees often pose a more prominent and – luckily – a more manageable cybersecurity threat. IBM estimates that human error causes nearly a quarter of all data breaches. Additionally, employees commonly and inadvertently compromise company data through poor password hygiene, accidental data sharing, improper technology use, phishing scams, and more. Some employees will also act maliciously, intentionally stealing company data for profit, retribution, or fun. The market for sensitive data is so prolific that some cybersecurity experts predict the emergence of insiders-as-a-service as bad actors capitalize on remote work trends to infiltrate companies.


The Rise of Responsible AI

In the public safety arena, using biased data to train AI that identifies criminals through cyber forensics can lead to the wrongful conviction of innocent people: the software’s output is skewed by racial and ethnicity data points that slipped in because the code was not tested properly or the wrong data sets were used for testing, destroying lives in the process. Apart from bias in the data set, we have also seen that during application or transactional data processing there is no transparency into why a decision was taken, which parameters influenced it, and what steps the algorithm took to mitigate it. All of these questions can be answered by embedding explainability and transparency in the AI design process, providing understandability of the context and interpretability of the decisions made by AI. Thus we need responsible AI: the practice of using AI with good intention to empower employees and businesses and to impact customers and society fairly, allowing companies to engender trust and scale AI with confidence, while providing a framework to ensure the ethical, transparent and accountable use of AI technologies consistent with user expectations, organizational values and societal laws and norms.


Adaptive Frontline Incident Response: Human-Centered Incident Management

Many companies struggle with defining an incident. To us, an incident is when a service or feature functionality is degraded. But defining "degraded" contains a multitude of possibilities. One could say "degraded" is when something isn’t working as expected. But what if it’s better than expected? What’s the expected behavior? Do you define it based on customer impact? Do you wait until there’s customer impact to declare an issue an incident? This is where having a common and shared understanding of the normal operating behavior of the system and formalizing these in feature/service level objectives and indicators are key. We have to know what we expect, to know when a degradation becomes an incident. But, defining service level objectives for legacy services already in operation takes a significant investment of time and energy that might not be available right now. That’s the reality in which we frequently operate, trading off efficiency with thoroughness, as Hollnagel (2009) points out. We handle this tradeoff with a governing set of generic thresholds to fill in for services without clear indicators. At Twilio we have a lot of products, running the gamut from voice calls, video conferencing, and text messages, to email and two factor authentication.
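
One way to make "degraded" testable is to express it as a service level objective over a service level indicator, with a generic fallback threshold for legacy services that have no agreed objective yet. The sketch below is illustrative: the objective values, window, and function names are assumptions, not Twilio's actual thresholds.

```python
# Minimal sketch: deciding whether a degradation counts as an incident by comparing
# an availability SLI against a service level objective (or a generic fallback threshold).
def availability_sli(successes: int, total: int) -> float:
    """Fraction of requests served successfully over the evaluation window."""
    return successes / total if total else 1.0

SLO = 0.999               # e.g. 99.9% of requests succeed over a rolling window
GENERIC_THRESHOLD = 0.99  # fallback for legacy services without a defined objective

def is_incident(successes: int, total: int, slo: float = GENERIC_THRESHOLD) -> bool:
    return availability_sli(successes, total) < slo

# 99,850 of 100,000 requests succeeded: below a 99.9% objective, so treat it as an incident.
print(is_incident(successes=99_850, total=100_000, slo=SLO))
```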



Quote for the day:

"Don't look back. Something might be gaining on you." -- Satchel Paige

Daily Tech Digest - June 15, 2020

Can I read your mind? How close are we to mind-reading technologies?

Technology nowadays is already heavily progressing in artificial intelligence, so it doesn’t seem too farfetched. Humans have already developed brain-computer interface (BCI) technologies that can safely be used on humans. ... How would the government play a role in these mind-reading technologies? How would it affect the eligibility of use of the technology? Don’t you think some unethical play would be prevalent, because I sure do. I’m not very ethically inclined to believe these companies aren’t sending our data to other companies without our consent. I found this term “Neurorights” in a Vox article, “Brain-reading tech is coming. The law is not ready to protect us” written by Sigal Samuel. It’s a good read, and I think she explores in depth how this would impact society from a privacy standpoint. She discusses having 4 core new rights protected within the law: The right to your cognitive library, mental privacy, mental integrity, and psychological continuity. She mentions, “brain data is the ultimate refuge of privacy”. Once it’s collected, I believe you can’t get it back. There needs to be strict laws enforced if this were to become a ubiquitous technology.


It's The End Of Infrastructure-As-A-Service As We Know It: Here's What's Next

Containers are the next step in the abstraction trend. Multiple containers can run on a single OS kernel, which means they use resources more efficiently than VMs. In fact, on the infrastructure required for one VM, you could run a dozen containers. However, containers do have their downsides. While they're more space efficient than VMs, they still take up infrastructure capacity when idle, running up unnecessary costs. To reduce these costs to the absolute minimum, companies have another choice: Go serverless. The serverless model works best with event-driven applications — applications where a finite event, like a user accessing a web app, triggers the need for compute. With serverless, the company never has to pay for idle time, only for the milliseconds of compute time used in processing a request. This makes serverless very inexpensive when a company is getting started at a small volume while also reducing operational overhead as applications grow in scale. Transitioning to containerization or a serverless model requires major changes to your IT teams' processes and structure and thoughtful choices about how to carry out the transition itself.
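
A minimal event-driven serverless function looks roughly like the sketch below, written in the style of an AWS Lambda handler triggered by a file upload. The event shape, the thumbnail helper, and the sample invocation are assumptions for illustration; the point is that compute is billed only while this function runs.

```python
# Minimal sketch: an event-driven serverless handler. The platform invokes it only
# when an event arrives (e.g. a user uploads a file), so nothing is paid for idle time.
import json

def make_thumbnail(key: str) -> bool:
    # Placeholder for the actual work the event triggers (hypothetical helper).
    return True

def handler(event, context):
    record = event["Records"][0]
    object_key = record["s3"]["object"]["key"]
    ok = make_thumbnail(object_key)
    return {"statusCode": 200, "body": json.dumps({"processed": object_key, "ok": ok})}

if __name__ == "__main__":
    # Local smoke test with a fabricated upload event.
    sample = {"Records": [{"s3": {"object": {"key": "uploads/photo.jpg"}}}]}
    print(handler(sample, context=None))
```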


9 Future of Work Trends Post-COVID-19

Before COVID-19, critical roles were viewed as roles with critical skills, or the capabilities an organization needed to meet its strategic goals. Now, employers are realizing that there is another category of critical roles — roles that are critical to the success of essential workflows. To build the workforce you’ll need post-pandemic, focus less on roles — which group unrelated skills — than on the skills needed to drive the organization’s competitive advantage and the workflows that fuel that advantage. Encourage employees to develop critical skills that potentially open up multiple opportunities for their career development, rather than preparing for a specific next role. Offer greater career development support to employees in critical roles who lack critical skills. ... After the global financial crisis, global M&A activity accelerated, and many companies were nationalized to avoid failure. As the pandemic subsides, there will be a similar acceleration of M&A and nationalization of companies. Companies will focus on expanding their geographic diversification and investment in secondary markets to mitigate and manage risk in times of disruption. This rise in complexity of size and organizational management will create challenges for leaders as operating models evolve.


South African bank to replace 12m cards after employees stole master key

"According to the report, it seems that corrupt employees have had access to the Host Master Key (HMK) or lower level keys," the security researcher behind Bank Security, a Twitter account dedicated to banking fraud, told ZDNet today in an interview. "The HMK is the key that protects all the keys, which, in a mainframe architecture, could access the ATM pins, home banking access codes, customer data, credit cards, etc.," the researcher told ZDNet. "Access to this type of data depends on the architecture, servers and database configurations. This key is then used by mainframes or servers that have access to the different internal applications and databases with stored customer data, as mentioned above. "The way in which this key and all the others lower-level keys are exchanged with third party systems has different implementations that vary from bank to bank," the researcher said. The Postbank incident is one of a kind as bank master keys are a bank's most sensitive secret and guarded accordingly, and are very rarely compromised, let alone outright stolen.


What matters most in an Agile organizational structure

An Agile organizational strategy that works for one organization won't necessarily work for another. The chapter excerpt includes a Spotify org chart, which the authors describe as, "Probably the most frequently emulated agile organizational model of all." But an Agile model that serves as a standard of success won't necessarily replicate to another organization well. Agile software developers aim to better meet customer needs. To do so, they need to prioritize, release and adapt software products more easily. Unlike the Spotify-inspired tribe structure, Agile teams should remain located closely to the operations teams that will ultimately support and scale their work, according to the authors. This model, they argue in Doing Agile Right, promotes accountability for change, and willingness to innovate on the business side. Any Agile initiative should follow the sequence of "test, learn, and scale." People at the top levels must accept new ideas, which will drive others to accept them as well. Then, innovation comes from the opposite direction. "Agile works best when decisions are pushed down the organization as far as possible, so long as people have appropriate guidelines and expectations about when to escalate a decision to a higher level."


What is process mining? Refining business processes with data analytics

Process mining is a methodology by which organizations collect data from existing systems to objectively visualize how business processes operate and how they can be improved. Analytical insights derived from process mining can help optimize digital transformation initiatives across the organization. In the past, process mining was most widely used in manufacturing to reduce errors and physical labor. Today, as companies increasingly adopt emerging automation and AI technologies, process mining has become a priority for organizations across every industry. Process mining is an important tool for organizations that are committed to continuously improving IT and business processes. Process mining begins by evaluating established IT or business processes to find repetitive tasks that can by automated using technologies such as robotic process automation (RPA), artificial intelligence and machine learning. By automating repetitive or mundane tasks, organizations can increase efficiency and productivity — and free up workers to spend more time on creative or complex projects. Automation also helps reduce inconsistencies and errors in process outcomes by minimizing variances. Once an IT or business process is developed, it’s important to consistently check back to ensure the process is delivering appropriate outcomes — and that’s where process mining comes in.
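
As a concrete illustration of spotting repetitive tasks that could be handed to RPA, the sketch below groups a toy event log into process "variants" and counts how often each occurs. The log, column names, and activities are made up for the example; real logs come from existing systems of record.

```python
# Minimal sketch: finding repetitive task sequences (automation candidates) in an event log.
import pandas as pd

log = pd.DataFrame({
    "case_id":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "activity": ["Receive invoice", "Enter in ERP", "Approve",
                 "Receive invoice", "Enter in ERP", "Approve",
                 "Receive invoice", "Correct data", "Approve"],
})

# Collapse each case into its ordered sequence of activities (its "variant").
variants = (
    log.groupby("case_id")["activity"]
       .apply(lambda s: " -> ".join(s))
       .value_counts()
)

# Frequent, identical variants are repetitive work RPA can take over;
# rare variants point at inconsistencies and errors in how the process is run.
print(variants)
```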


How to improve cybersecurity for artificial intelligence

One of the major security risks to AI systems is the potential for adversaries to compromise the integrity of their decision-making processes so that they do not make choices in the manner that their designers would expect or desire. One way to achieve this would be for adversaries to directly take control of an AI system so that they can decide what outputs the system generates and what decisions it makes. Alternatively, an attacker might try to influence those decisions more subtly and indirectly by delivering malicious inputs or training data to an AI model. For instance, an adversary who wants to compromise an autonomous vehicle so that it will be more likely to get into an accident might exploit vulnerabilities in the car’s software to make driving decisions themselves. However, remotely accessing and exploiting the software operating a vehicle could prove difficult, so instead an adversary might try to make the car ignore stop signs by defacing them with graffiti, so that the computer vision algorithm can no longer recognize them as stop signs. This process by which adversaries cause AI systems to make mistakes by manipulating inputs is called adversarial machine learning.
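To make the idea of manipulated inputs concrete, the following sketch (NumPy; the linear model, weights, and input are toy assumptions rather than anything from the article) applies the classic fast gradient sign method to a linear classifier, showing how a small, deliberately chosen perturbation flips the prediction even though the input barely changes:

```python
import numpy as np

# Toy linear classifier: predict class 1 if w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.9, 0.2, 0.4])            # a "clean" input, classified as 1
print("clean prediction:", predict(x))    # -> 1

# Fast gradient sign method (FGSM): for a linear score w.x + b, the gradient
# with respect to x is simply w, so nudge each feature against the score.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print("max perturbation:", np.max(np.abs(x_adv - x)))   # bounded by epsilon
print("adversarial prediction:", predict(x_adv))         # -> 0, the prediction flips
```

The graffiti-on-stop-signs scenario is the physical-world analogue: the attacker cannot touch the model itself, but can craft an input that lands on the wrong side of its decision boundary.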


Using a DDD Approach for Validating Business Rules

For modeling commands that can be executed by clients, we need to identify them by assigning them names. For example, it can be something like MakeReservation. Notice that we are moving these design definitions toward a middle point between software design and business design. It may sound trivial, but when it’s specified, it helps us to understand a system design more efficiently. The idea connects with the HCI (human-computer interaction) concept of designing systems with a task in mind; the command helps designers to think about the specific task that the system needs to support. The command may have additional parameters, such as date, resource name, and a description of the usage. ... Production rules are the heart of the system. So far, the command has traveled through different stages which should ensure that the provided request can be processed. Production rules specify the actions the system must perform to achieve the desired state. They deal with the task a client is trying to accomplish. Using the MakeReservation command as a reference, they make the necessary changes to register the requested resource as reserved.
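A minimal sketch of these ideas, assuming a Python implementation and purely hypothetical names: the command is just a named, parameterised request, and the production rules live in a handler that validates the request and applies the state change.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class MakeReservation:
    """Command: a named request carrying the parameters the task needs."""
    resource_name: str
    reservation_date: date
    description: str

class ReservationBook:
    """Holds current reservations; its production rules decide state changes."""

    def __init__(self):
        self._reserved = set()  # (resource_name, date) pairs already taken

    def handle(self, cmd: MakeReservation) -> None:
        key = (cmd.resource_name, cmd.reservation_date)
        # Business rule: a resource can be reserved only once per day.
        if key in self._reserved:
            raise ValueError(f"{cmd.resource_name} is already reserved on {cmd.reservation_date}")
        # Production rule: register the requested resource as reserved.
        self._reserved.add(key)

book = ReservationBook()
book.handle(MakeReservation("Meeting Room A", date(2019, 12, 30), "Quarterly planning"))
```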


7 Ways to Reduce Cloud Data Costs While Continuing to Innovate

This is a difficult time for enterprises, which need to tightly control costs amid the threat of a recession while still investing sufficiently in technology to remain competitive. ... This is especially true of analytics and machine learning projects. Data lakes, ideally suited for machine learning and streaming analytics, are a powerful way for businesses to develop new products and better serve their customers. But with data teams able to spin up new projects in the cloud easily, infrastructure must be managed closely to ensure every resource is optimized for cost and every dollar spent is justified. In the current economic climate, no business can tolerate waste. But enterprises aren’t powerless. Strong financial governance practices allow data teams to control and even reduce their cloud costs while still allowing innovation to happen. Creating appropriate guardrails that prevent teams from using more resources than they need and ensuring workloads are matched with the correct instance types to optimize savings will go a long way to reducing waste while ensuring that critical SLAs are met.
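One lightweight way to express such a guardrail is a policy check that flags workloads whose requested instance type is larger (and more expensive) than their observed utilisation justifies. The sketch below uses entirely hypothetical instance names, prices, and metrics; a real policy would pull utilisation and pricing data from the cloud provider's APIs.

```python
# Hypothetical hourly prices and a simple right-sizing rule.
HOURLY_PRICE = {"small": 0.10, "medium": 0.40, "large": 1.60}

workloads = [
    {"name": "nightly-etl",   "instance": "large",  "avg_cpu_pct": 18},
    {"name": "feature-store", "instance": "medium", "avg_cpu_pct": 72},
]

def flag_oversized(workloads, cpu_threshold=30):
    """Flag workloads running on large instances they barely use."""
    for w in workloads:
        if w["instance"] == "large" and w["avg_cpu_pct"] < cpu_threshold:
            monthly_saving = (HOURLY_PRICE["large"] - HOURLY_PRICE["medium"]) * 24 * 30
            print(f"{w['name']}: avg CPU {w['avg_cpu_pct']}%, consider downsizing; "
                  f"potential saving ~${monthly_saving:.0f}/month")

flag_oversized(workloads)
```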


Who Should Lead AI Development: Data Scientists or Domain Experts?

To lead these efforts ethically and effectively, Chraibi suggested data scientists such as himself should be the driving force. “The data scientists will be able to give you an insight into how bad it will be using a machine-learning model” if ethical considerations are not taken into account, he said. But Paul Moxon, senior vice president for data architecture at Denodo Technologies, said his experience working with AI development in the financial sector has given him a different perspective. “The people who raised the ethics issues with banks—the original ones—were the legal and compliance team, not the technologists,” he said. “The technologists want to push the boundaries; they want to do what they’re really, really good at. But they don’t always think of the inadvertent consequences of what they’re doing.” In Moxon’s opinion, data scientists and other technology-focused roles should stay focused on the technology, while risk-centric roles like lawyers and compliance officers are better suited to considering broader, unintended effects. “Sometimes the data scientists don’t always have the vision into how something could be abused. Not how it should be used but how it could be abused,” he said.



Quote for the day:

"Only the disciplined ones in life are free. If you are undisciplined, you are a slave to your moods and your passions." -- Eliud Kipchoge

Daily Tech Digest - December 29, 2019

Are we running out of time to fix aviation cybersecurity?

Flying remains one of the safest ways to travel, and that's due in large part to continuous efforts to improve air safety. Cultural norms in aviation have rewarded and incentivized a whistleblowing culture, where the lowliest mechanic can throw a red flag and stop a jet from taking off if he notices a potential safety issue. Contrast that with the often-fraught issue of reporting security vulnerabilities, where shame, finger-pointing and buck-passing are the norm. The report highlights the problem, writing, "Across much of the cybersecurity landscape, there arguably remains a stigma about discussing cybersecurity vulnerabilities and challenges that go beyond managing sensitive vulnerabilities." A wormable exploit or a backdoored software update — like the backdoored MeDoc software update that started the Petya worm — could cause safety issues at scale. It’s unclear that the aviation industry’s traditional safety thinking is sufficient to meet this challenge. For instance, the report calls out the need for greater information sharing on aviation cybersecurity threats, acknowledging the risk of a Maersk-like scenario and observing rather drily that "other sectors have seen the scale and costs from a single vulnerability and 'wormable' exploit."



AI vs. Machine Learning: Which is Better?


Artificial intelligence comes from the words "artificial" and "intelligence." Artificial means created by humans rather than occurring naturally, and intelligence means the ability to think and understand things. Some people think artificial intelligence is a system, but in fact it sits within a system. AI follows stipulated rules that were pre-determined by an algorithm set by a person. AI most commonly appears on smartphones, desktop computers, and smartwatches. ... Machine learning is capable of learning by itself. It is a computer system that can acquire knowledge and solve problems based on its experience. ML acts on data provided by humans and predicts accurate solutions based on the information gathered by the machine. Machine learning takes a different approach from this kind of artificial intelligence: a machine learning algorithm is capable of deciding on its own, whereas such an AI answers a pre-determined question with a pre-determined solution.


How AI, Analytics & Blockchain Empower Efficient & Intelligent Supply Chain?


Living up to its promise to disrupt every industry for the better, AI is transforming supply chain management as well. The technology has a number of applications in the supply chain, including extraction of information, analysis of data, planning for supply and demand, and better management of autonomous vehicles and warehouses. AI-enabled NLP scans through supply chain documents such as contracts, purchase orders, chat logs with customers or suppliers, and other significant documents to identify commonalities, which are used as feedback to optimize SCM as part of continual improvement. ML helps people manage the flow of goods throughout the supply chain while ensuring that raw materials and products are in the right place at the right time. The technology can also source and process data from different areas and forecast future demand based on external factors. And most importantly, AI helps analyze warehouse processes and optimize the sending, receiving, storing, picking and management of individual products.


Netgear Nighthawk M2 Mobile Router, hands on

Very much designed as a 'travel router', the square lozenge of the M2 measures 105mm by 105mm by 20.5mm and weighs 240g. It's easy to slip into a briefcase or bag when you're travelling, and won't weigh you down. Like its M1 predecessor, the M2 relies on 4GX LTE mobile broadband, as Netgear argues that 5G networks aren't sufficiently widespread to justify the extra cost of adding 5G support. However, Category 20 4GX LTE support means that the M2 doubles the maximum download speed from the M1's 1Gbps to 2Gbps, although the upload speed remains the same at 150Mbps. It then uses dual-band 802.11ac to create its own wi-fi network, which can support connections from up to 20 separate devices. The M2 also gains a larger 2.4-inch touch-sensitive display that allows you to quickly configure the router, and to monitor signal strength, data usage and other settings. The Netgear Mobile app provides similar controls for Android and iOS devices, and there's a browser interface available for computers as well.


What is Jenkins? The CI server explained

Today Jenkins is the leading open-source automation server with some 1,400 plugins to support the automation of all kinds of development tasks. The problem Kawaguchi was originally trying to solve, continuous integration and continuous delivery of Java code (i.e. building projects, running tests, doing static code analysis, and deploying) is only one of many processes that people automate with Jenkins. Those 1,400 plugins span five areas: platforms, UI, administration, source code management, and, most frequently, build management. Jenkins is available as a Java 8 WAR archive and installer packages for the major operating systems, as a Homebrew package, as a Docker image, and as source code. The source code is mostly Java, with a few Groovy, Ruby, and Antlr files. You can run the Jenkins WAR standalone or as a servlet in a Java application server such as Tomcat. In either case, it produces a web user interface and accepts calls to its REST API. When you run Jenkins for the first time, it creates an administrative user with a long random password, which you can paste into its initial webpage to unlock the installation.
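As a small illustration of that REST API, the sketch below uses Python's requests library to list jobs and queue a build; the host, job name, and credentials are placeholders, and instances with CSRF protection enabled may additionally require a crumb header on POST requests.

```python
import requests

JENKINS_URL = "http://localhost:8080"   # placeholder Jenkins instance
AUTH = ("admin", "your-api-token")      # placeholder user name and API token

# List the jobs Jenkins knows about via the JSON API.
jobs = requests.get(f"{JENKINS_URL}/api/json", auth=AUTH).json()
print([job["name"] for job in jobs.get("jobs", [])])

# Queue a build of a hypothetical job called "my-pipeline".
resp = requests.post(f"{JENKINS_URL}/job/my-pipeline/build", auth=AUTH)
print(resp.status_code)  # 201 means the build was queued
```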


Process Mining vs. Business Process Discovery

Harvard Business Review – a publication that’s unfortunately becoming increasingly political by the day – published an article about process mining earlier this year, written by two individuals who have been involved with the field for four decades now. According to the experts, process mining solves a few fundamental challenges associated with business process management: companies tend to spend either too little or too much time analyzing "as is" business processes, and there is a lack of connection between business processes and an organization’s enterprise information systems. Starting with the first point, we’d argue that for most companies, if you can’t figure out the optimal time to spend analyzing an existing business process, hire better BPAs. The second point describes the inability to capture the “interoperability” of processes and information systems. Fair enough. Organizations are incredibly complex entities, and one department might interact with hundreds of internal systems. Enter process mining and a German company called Celonis.


Azure Cosmos DB — A to Z


Behind the scenes, Cosmos DB uses a data-distribution algorithm to increase the RUs/performance of the database: every container is divided into logical partitions based on the partition key, and a hash algorithm is used to divide and distribute the data across them. These logical partitions are in turn mapped to physical partitions (hosted on multiple servers). Placement of logical partitions over physical partitions is handled by Cosmos DB to efficiently satisfy the scalability and performance needs of the container. As the RU needs increase, Cosmos DB increases the number of physical partitions (more servers). As a best practice, you should choose a partition key that has a wide range of values and access patterns that are evenly spread across logical partitions. For example, if you are collecting data from multiple schools but 75% of your data comes from one school only, it’s not a good idea to use the school as the partition key.
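To see why a skewed partition key matters, here is a small sketch in plain Python (the hashing scheme is a stand-in for illustration, not Cosmos DB's actual placement algorithm) that hashes items into logical partitions and shows how one dominant school value concentrates most of the data in a single "hot" partition:

```python
import hashlib
from collections import Counter

def logical_partition(partition_key_value, partitions=10):
    """Stand-in for hash-based placement (not Cosmos DB's real algorithm)."""
    digest = hashlib.md5(partition_key_value.encode()).hexdigest()
    return int(digest, 16) % partitions

# Hypothetical dataset: 75% of the records come from a single school.
records = ["school-A"] * 750 + [f"school-{i}" for i in range(1, 251)]

counts = Counter(logical_partition(r) for r in records)
print(sorted(counts.items()))
# One partition ends up holding roughly three quarters of the data, so its RU
# consumption and storage grow while the other partitions sit nearly idle.
```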


Digital process automation vs. robotic process automation

Digital process automation can also be easily confused with another similar term: robotic process automation. Robotic process automation (RPA) uses more intelligent automation technology -- such as artificial intelligence (AI) and machine learning (ML) -- to handle high-volume, repeatable tasks. RPA can be used to automate queries and calculations as well as maintain records and transactions. This is typically done using bots such as probots, knowbots or chatbots. What distinguishes RPA from other forms of IT automation is the ability of the RPA software to be aware and adapt to changing circumstances, exceptions and new situations. Whereas DPA comes from BPM, and BPA comes from infrastructure management, RPA is not considered a part of the infrastructure. Instead, RPA sits on top of an organization's infrastructure; this allows an organization to quickly implement a digital process technology.


Data governance & retention in your Microsoft 365 tenant

Data governance has traditionally relied on transferring data to a third party that hosts an archive service. Emails, documents, chat logs, and third-party data (Bloomberg, Facebook, LinkedIn, etc.) must be saved in a way that ensures it can’t be changed and won’t be lost. Data governance is part of IT at the enterprise level. It serves regulatory compliance, can facilitate eDiscovery, and is part of a business strategy to protect the integrity of the data estate. However, there are downsides. In addition to acquisition costs, the archive is one more system that needs ongoing maintenance. When data is moved to another system, the risk footprint is increased, and data can be compromised in transit. An at-rest archive can become another target of attack. When you take the data to the archive, you miss the opportunity to reason over it with machine learning to extract additional business value and insights to improve the governance program. The game changer is to have reliable, auditable retention inside the Microsoft 365 tenant.


Top 6 Software Testing Trends to Look Out in 2020

Despite the promising prospects of AI/ML application in software testing, experts still regard AI/ML in testing as being in its infancy. Numerous challenges therefore remain before applications of AI/ML in testing reach maturity. The rising demand for AI in testing and QA teams signals that it's time for Agile teams to acquire AI-related skill sets, including data science, statistics and mathematics. These skill sets will be the ultimate complement to core domain skills in test automation and software development engineer in test (SDET) work. Additionally, successful testers need to adopt a combination of pure AI skills and non-traditional skills. Indeed, last year a variety of new roles were introduced, such as AI QA analyst and test data scientist. As for automation tool developers, they should focus on building tools that are practical. Companies are utilizing PoCs and reassessing options to make the best use of AI while considering budgets.



Quote for the day:


"Everyone wants to be appreciated. So if you appreciate someone, don't keep it a secret." -- Mary Kay Ash