Daily Tech Digest - April 20, 2021

How to write a cyberthreat report executives can really use

Although there’s no single template for crafting a threat report, “it should look like whatever you think people will read," says deGrazia. "Senior managers get hit with lots and lots of paper, so whatever format it’s in, it has to get their attention.” CISOs also need to consider how often they want to generate these reports. Security leaders say the reports should come out on a regular schedule, whether they’re passed out weekly as Stebila did, monthly, or quarterly. The best schedule is one that matches the organization’s own cultural tempo, Rawlins says, adding that CISOs could also create and distribute customized reports to different recipients on different schedules based on the varying levels of threats and interest levels each party has. CISOs could, for example, share reports weekly with their CIOs but distribute them to the board only semi-annually. That regular schedule should not preclude sending out threat reports in response to urgent issues, security experts say. “You can’t ignore the fact that things come up, and come up quickly, and those things need to be communicated up the chain as quickly as possible,” deGrazia adds.


Consumer data protection is a high priority, but there’s still work to be done

“Most pertinently, it’s encouraging that consumer data protection is such a high priority for organizations, but there is clearly some work to be done in turning that priority into a reality in terms of what data is actually encrypted and at what points in the data lifecycle. It’s also apparent that organizations of all shapes and sizes are looking to adopt encryption for a range of new and cutting-edge use cases, which will no doubt continue to drive innovation in the industry.” “IT is tasked with deploying, tracking and managing encryption and security policy across on-premises, cloud, multi-cloud and hybrid environments, for an expanding array of use cases, and amidst widening threats. Encryption is essential for protecting company and customer data, but managing encryption and protecting the associated secret keys are rising pain points as organizations engage multiple cloud services for critical functions,” added Grimm. “Rising use of HSMs for encryption and key management shows that IT is starting to meet these challenges. Organizations will benefit from a growing ecosystem of integrated solutions for cloud security policy management ...”


The security impact of shadow IT

Shadow IT is also one of the reasons why strict compliance-based approaches to cyber security can only help you so far. If you are measuring patching of your internal systems as a security key performance indicator (KPI), for example, then you need to be conscious that if you have a 99% success rate at patching servers, an adversary will probably find that 1% of servers you have not patched. And if you have a 100% success rate at patching servers, you absolutely have to make sure that every server that exists is part of that measurement – if you have a server which is not enrolled in asset management and therefore not monitored in patch management processes, you could still be exposed and not be aware of it. We talk about the “advanced persistent threat” a lot in security, and it is easy to get hung up on the “advanced” part of that epithet. Although “advanced” is dangerous, what we should be most concerned about is “persistent”. You may have thousands of servers properly enrolled in your technical controls, fully-monitored and fully-patched – and one undocumented server which is not patched and not monitored.
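The measurement gap described above is easy to demonstrate: a patching KPI computed only against the asset-management inventory can read 100% while unenrolled servers stay invisible. A minimal sketch (all host names are hypothetical):

```python
# Illustrative sketch: a patching KPI is only as strong as the inventory
# it is measured against. All host names below are hypothetical.
managed = {"web-01", "web-02", "db-01"}                  # enrolled in asset management
discovered = {"web-01", "web-02", "db-01", "legacy-04"}  # actually seen on the network
patched = {"web-01", "web-02", "db-01"}                  # covered by patch management

kpi = len(patched & managed) / len(managed)  # reads as 100% coverage
blind_spots = discovered - managed           # yet legacy-04 is never measured

print(f"Patch KPI vs. managed inventory: {kpi:.0%}")
print(f"Unmanaged, unmeasured hosts: {sorted(blind_spots)}")
```

The point of the sketch is that the KPI is perfect while the one undocumented server never enters the denominator at all.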


Business Process Automation at Scale Is Key to Customer and Employee Experience

“The electronic signature is often where folks start,” asserted Casey of DocuSign. “I think that’s wonderful, obviously. But we have also started to step back and think about the systems of agreement that businesses have as a whole.” ... “Sure, automation will always cut costs—but we want to consider the experience. That’s what’s durable,” said Casey. During COVID, short-term fixes on the ground were prioritized over long-term solutions with high-level, lasting impacts. Now, the tide is beginning to shift. The benefits of full-scale automation—like better customer experiences, business agility, increased productivity, and greater security—are clearer than ever before. But what does strategic end-to-end automation look like in practice? ... Automating at scale is both technical science and change management art. For instance, close to 50% of businesses today claim that they are prepared to invest in an automated, end-to-end contract management solution, but simply don’t have the tools or know-how to do it effectively. “The problem is that end-to-end automation requires a lot of technology,” said Koplowitz.


The clash over the hybrid workplace experience

To optimize the employee experience of their hybrid workforce, employers should focus on "digital parity" as well as employee "experience parity," according to IDC. Digital parity refers to the requirement that all workers have secure access to the resources required to do their jobs, no matter their preferred device or location (office/remote/in the field). Experience parity means a democratized workplace, where all employees have the opportunity to collaborate, learn, develop, innovate and succeed, the report said. ... "Businesses everywhere must place a greater priority on enhancing employee experiences, which in turn will drive higher productivity, collaboration and better customer outcomes," said Leon Gilbert, senior vice president and general manager, Digital Workplace Services, Unisys, in a statement. "Organizations that adapt to provide digital and experience parity will not only retain employees in a competitive marketplace but will also empower those employees to provide the best service possible to their organization's customers. Do it well and you drive engagement, productivity and adaptability as new workforce demands emerge."


TCP/IP stack vulnerabilities threaten IoT devices

The actual danger to which an organization is exposed differs based on which of the vulnerable stacks it’s using. The FreeBSD vulnerability is likely more widespread – it affects millions of IT networks, including Netflix and Yahoo, as well as traditional networking devices like firewalls and routers, according to the report – but is likely easier to fix. “Those are manageable systems – we should be able to update them,” said Forrester senior analyst Brian Kime. “[And] they should be prioritized for remediation, because they’re part of your network stack.” The same cannot be said, in many cases, of the real-time operating systems affected by Name:Wreck, since the standard issues that make securing IoT devices difficult remain in play here. The ability to patch and update firmware is still not a standard feature, and the OEMs of connected devices – which may be quite old, and may not have been designed to be Internet-facing in the first place – might not even be operating any more. In cases where those IoT devices are vulnerable, strong security has to start at the network layer, according to Hanselman. Monitoring the network directly for anomalous activity – which, again, can sometimes be difficult to detect in the case of a TCP/IP vulnerability – is a good start, but what’s really needed are techniques like DNS query protection.


The Four Fs of employee experience

To deliver an optimal employee experience (EX), we recommend focusing on four principles that we call the Four Fs. They are a set of heuristics inspired by the user-centric, iterative practice of design thinking, and they rest on the idea that your business goals, experiences, and technology are inseparable from one another and must be addressed in a unified, cross-company way. We refer to this approach as BXT (for business, experience, and technology). When applied to EX, the Four Fs unlock productivity and cut down on energy-sapping frustration stemming from internal systems and tools. They are the form, flow, feeling, and function of an employee’s work life. ... Employees can’t do their jobs well if they don’t understand what is being asked of them, the purpose of the work, or how they should prioritize their tasks. A firm we advised recently had received feedback from staff that the online training module for a new marketing curriculum it had developed was hard to follow and a bad experience overall. To address the problem, the company’s user experience team worked with PwC and a leading software firm to reimagine the employee learning interface.


Building a learning culture that drives business forward

We all think we have it. So we might say, “I’m a fast learner” or “I’m a slow learner” or “I learn in this way or that way.” But, actually, a lot of the underlying research—there are several strands of research—shows that people can actually build skills to learn new skills. We think of this as one of the most fundamental capabilities that a person can develop for themselves. It makes you better at getting better at things. It makes you better able to adapt to the changing environment that we all face these days. This idea of learning as a skill, in and of itself, is a fundamental one, and one that we talk to a lot of our clients about and, frankly, a lot of our colleagues as well. Because they’re also curious. They want to learn. But they need to be taught. Back in school, you might have thought about this as study skills. How do I organize myself in order to get my schoolwork done? But there’s a much more sophisticated version of that when you think about adult learners that I think we all need to invest in more. ... Learning to follow is listening before talking and learning how to be a contributor so that you can then lead. There are a few ways you can learn how to follow.


Concerns grow over digital threats faced from former employees

"A lot of companies fail to have clear policies or a checklist that employers use for post-employee separation. This is extremely important because failing to do so is going to involve a lot of things but the most important thing is that you want to make sure that the former employee or even a subcontractor that previously had access to the organization's technologies and systems is completely locked out," Guccione said in an interview. "It's going to avoid the risk of business disruption. It's going to avoid the risk of the leakage of intellectual property or trade secrets. It also mitigates legal risk because what you don't want is any exposure of or unauthorized access to sensitive data about the organization or its stakeholders. If a door is left open to a former member of the team and that person is disgruntled, you could have a real problem on your hands." ... In December, the Justice Department announced that a former Cisco worker was sentenced to two years in prison after he accessed the Cisco Systems cloud infrastructure that was hosted by Amazon Web Services and deleted 456 virtual machines for Cisco's Webex Teams application.


MLOps, An Insider’s Perspective: Interview With Nikhil Dhawan

Large tech firms have used data science and its various techniques to learn about consumer behaviour for a long time. They have optimised their recommendation engines, bundled products together, improved targeting of the right customers, increased basket sizes and so on. They had the budget to dedicate resources to research and partnerships with academic institutes focused on statistical knowledge and theory. They also had a significant engineering function to build the infrastructure and tooling required to build on research outcomes. Smaller or business-focused firms don’t have this luxury. There is a big task list on any data science project, ranging from data acquisition and ingestion, determining or starting with initial algorithms, testing multiple variants including tuning the model and hyperparameters, preparing the datasets for each experiment, validating and comparing the outputs, etc. Finally, once we get the best possible trained model, the engineering task is to deploy the model to score or predict on live data to improve business functions.



Quote for the day:

"A leader does not deserve the name unless he is willing occasionally to stand alone." -- Henry A. Kissinger

Daily Tech Digest - April 19, 2021

Time to Modernize Your Data Integration Framework

You need to be able to orchestrate the ebb and flow of data among multiple nodes, either as multiple sources, multiple targets, or multiple intermediate aggregation points. The data integration platform must also be cloud native today. This means the integration capabilities are built on a platform stack that is designed and optimized for cloud deployments and implementation. This is crucial for scale and agility -- a clear advantage the cloud gives over on-premises deployments. Additionally, data management centers around trust. Trust is created through transparency and understanding, and modern data integration platforms give organizations holistic views of their enterprise data and deep, thorough lineage paths to show how critical data traces back to a trusted, primary source. Finally, we see modern data analytic platforms in the cloud able to dynamically, and even automatically, scale to meet the increasing complexity and concurrency demands of the query executions involved in data integration. The new generation of data integration platforms also works at any scale, executing massive numbers of data pipelines that feed and govern the insatiable appetite for data in the analytic platforms.


Will codeless test automation work for you?

While outsiders view testing as simple and straightforward, it's anything but true. Until as recently as the 1980s, the dominant idea in testing was to do the same thing repeatedly and write down the results. For example, you could type 2+3 onto a calculator and see 5 as a result. With this straightforward, linear test, there are no variables, looping or condition statements. The test is so simple and repeatable, you don't even need a computer to run this test. This approach is born from thinking akin to codeless test automation: Repeat the same equation and get the same result each time for every build. The two primary methods to perform such testing are the record and playback method, and the command-driven test method. Record and playback tools run in the background and record everything; testers can then play back the recording later. Such tooling can also create verification points, to check the expectation that the answer field will become 5. Record and playback tools generally require no programming knowledge at all -- they just repeat exactly what the author did. It's also possible to express tests visually. Command-driven tests work with three elements: the command, any input values and the expected results.
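The three elements of a command-driven test can be sketched as a small table of (command, input values, expected result) rows fed to a generic runner. This is an illustrative toy, not any particular tool's format:

```python
# Minimal command-driven test sketch: each table row pairs a command and
# its inputs with an expected result, echoing the calculator example above.
def run_command(command, inputs):
    """Dispatch a named command against its input values."""
    if command == "add":
        return sum(inputs)
    if command == "multiply":
        result = 1
        for value in inputs:
            result *= value
        return result
    raise ValueError(f"unknown command: {command}")

# (command, input values, expected result) -- the three elements
test_table = [
    ("add", [2, 3], 5),
    ("multiply", [2, 3], 6),
]

for command, inputs, expected in test_table:
    actual = run_command(command, inputs)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{command}{inputs} -> {actual} ({status})")
```

Because the table is plain data, non-programmers can add rows without touching the runner, which is the appeal codeless tools trade on.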


Ghost in the Shell: Will AI Ever Be Conscious?

It’s certainly possible that the scales are tipping in favor of those who believe AGI will be achieved sometime before the century is out. In 2013, Nick Bostrom of Oxford University and Vincent Mueller of the European Society for Cognitive Systems published a survey in Fundamental Issues of Artificial Intelligence that gauged the perception of experts in the AI field regarding the timeframe in which the technology could reach human-like levels. The report reveals “a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075.” Futurist Ray Kurzweil, the computer scientist behind music-synthesizer and text-to-speech technologies, is a believer in the fast approach of the singularity as well. Kurzweil is so confident in the speed of this development that he’s betting hard. Literally: he’s wagering Mitch Kapor $10,000 that a machine intelligence will be able to pass the Turing test, a challenge that determines whether a computer can trick a human judge into thinking it is human, by 2029.


Is your technology partner a speed boat or an oil tanker?

The opportunity here really cannot be overstated. It is there for the taking by organisations that are willing to approach technological transformation in a radically different way. This involves breaking away from monolithic technology platforms, obstructive governance procedures, and the eye-wateringly expensive delivery programmes so often facilitated by traditional large consulting firms. The truth is, you simply don’t need hundreds of people to drive significant change or digital transformation. What you do need is to adopt new technology approaches, re-think operating models and work with partners who are agile experts, who will fight for their clients' best interests and share their knowledge to upskill internal staff. Hand-picking a select group of top individuals to work in this way provides a multiplier of value when compared to hiring greater numbers of less experienced staff members. Of course, external partners must be able to deliver at the scale required by the clients they work with. But just as large organisations have to change in order to embrace the benefits of the digital age, consulting models too must adapt to offer the services their clients need at the value they deserve.


Best data migration practices for organizations

The internal IT team needs to work closely with the service provider to thoroughly understand and outline the project requirements and deliverables. This ensures that no aspect is overlooked and that both sides are up to speed on the security and regulatory compliance requirements. Not just the vendor, but the team members and all the tools used in the migration need to meet all the necessary certifications to carry out a government project. Of course, certain territories will have more stringent requirements than others. Finally, an effective transition or change management strategy will be important to complete the transition. Proper internal communications and comprehensive training for employees will help everyone involved be aware of what’s required of them, including grasping any new processes or protocols and minimizing any productivity loss during the data migration. While the nitty-gritty of a public sector migration might be similar to a private company’s, a government data migration can be a much longer and more unwieldy process, especially with the vast number of people and the copious amounts of sensitive data involved.


Will AI dominate in 2021? A Big Question

It is fair to say that technology is captivating us completely with its fascinating innovations and gadgets. From artificial intelligence to machine learning, IoT, big data, virtual and augmented reality, blockchain, and 5G, everything seems set to take over the world before long. Keeping to the topic of artificial intelligence, this technology has expanded its grip on our lives without our even realizing it. During the pandemic, IT experts kept working from home, and the tech world kept witnessing smart ideas and AI-driven innovations. Artificial intelligence is going to be at the center of our new normal, and it will drive the other nascent technologies toward success. Soon, AI will be the genius core of automated and robotic operations. Companies are adopting artificial intelligence rapidly, and it is making its way into several sectors. 2020 saw this deployment on a wider scale; even with AI experts working from home, progress in the tech fields didn’t stop.


The promise of the fourth industrial revolution

There are some underlying trends in the following vignettes. The internet of things and related technologies are in early use in smart cities and other infrastructure applications, such as monitoring warehouses, or components of them, such as elevators. These projects show clear returns on investment and benefits. For instance, smart streetlights can make residents’ lives better by improving public safety, optimizing the flow of traffic on city streets, and enhancing energy efficiency. Such outcomes are accompanied by data that’s measurable, even if the social changes are not—such as reducing workers’ frustration from spending less time waiting for an office elevator. Early adoption is also found in uses in which the harder technical or social problems are secondary, or, at least, the challenges make fewer people nervous. While cybersecurity and data privacy remain important for systems that control water treatment plants, for example, such applications don’t spook people with concerns about personal surveillance. Each example has a strong connectivity component, too. None of the results come from “one sensor reported this”—it’s all about connecting the dots.


How Hundred-Year-Old Enterprises Improve IT Ops using Data and AIOps

Sam Chatman, VP of IT Ops at OneMain Financial, explains the impact of leveraging AIOps: “Being able to understand what is released, when it’s released, and the potential impacts of that release. We are overcoming alert fatigue, and BigPanda will be our Watson of the Enterprise Monitoring Center (EMC) by automating alerts, opening incident tickets, and identifying those actions to improve our mean time to recovery. This helps us keep our systems up when our users and customers need them to be.” For other organizations, it might help to visualize what naturally happens to IT operations’ monitoring programs over time. Every time systems go down and IT gets thrown under the bus for a major incident, they add new monitoring systems and alerts to improve their response times. As new multicloud, database, and microservice technologies emerged, they added even more monitoring tools and observability capabilities. Having more operational data and alerts is a good first step, but then alert fatigue kicks in when tier-one support teams respond and must make sense of dozens to thousands of alerts.
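At its core, the alert-fatigue problem is one of correlation: many raw alerts describe one underlying incident. A minimal sketch of collapsing alerts by a correlation key (hypothetical hosts and checks; real AIOps platforms use far richer correlation logic):

```python
# Toy alert deduplication: group raw alerts into incidents by (host, check)
# so responders see one ticket per problem instead of a flood of duplicates.
from collections import defaultdict

raw_alerts = [
    {"host": "db-01", "check": "cpu", "ts": 1},
    {"host": "db-01", "check": "cpu", "ts": 2},
    {"host": "db-01", "check": "cpu", "ts": 3},
    {"host": "web-02", "check": "latency", "ts": 2},
]

incidents = defaultdict(list)
for alert in raw_alerts:
    incidents[(alert["host"], alert["check"])].append(alert)

for key, alerts in incidents.items():
    print(f"incident {key}: {len(alerts)} alerts collapsed into 1 ticket")
```

Four raw alerts become two incidents; scale the same idea to thousands of alerts and the value of automated correlation becomes obvious.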


A perfect storm: Why graphics cards cost so much now

Demand for gaming hardware blew up during the pandemic, with everyone bored and stuck at home. In the early days of the lockdowns in the United States and China, Nintendo’s awesome Switch console became red-hot. Even replacement controllers and some games became hard to find. ... Beyond the AMD-specific TSMC logjam, the chip industry in general has been suffering from supply woes. Even automakers and Samsung have warned that they’re struggling to keep up with demand. We’ve heard whispers that the components used to manufacture chips—from the GDDR6 memory used in modern GPUs to the substrate material fundamentally used to construct chips—have been in short supply as well. Seemingly every industry is seeing vast demand for chips of all sorts right now. ... High demand and supply shortages are the perfect recipe for folks looking to flip graphics cards and make a quick buck. The second they hit the streets, the current generation of GPUs were set upon by “entrepreneurs” using bots to buy up stock faster than humans can, then selling their ill-gotten wares for a massive markup on sites like Ebay, StockX, and Craigslist.


How to sharpen machine learning with smarter management of edge cases

Production is when AI models prove their value, and as AI use spreads, it becomes more important for businesses to be able to scale up model production to remain competitive. But as Shlomo notes, scaling production is exceedingly difficult, as this is when AI projects move from the theoretical to the practical and have to prove their value. “While algorithms are deterministic and expected to have known results, real world scenarios are not,” asserts Shlomo. “No matter how well we define our algorithms and rules, once our AI system starts to work with the real world, a long tail of edge cases will start exposing the definition holes in the rules, holes that are translated to ambiguous interpretation of the data and leading to inconsistent modeling.” That’s much of the reason why more than 90% of C-suite executives at leading enterprises are investing in AI, but fewer than 15% have deployed AI for widespread production. Part of what makes scaling so difficult is the sheer number of factors for each model to consider. This is where human-in-the-loop (HITL) approaches help: HITL enables faster, more efficient scaling, because the ML model can begin with a small, specific task, then scale to more use cases and situations.
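A common human-in-the-loop pattern is confidence-based routing: the model handles predictions it is sure about and escalates ambiguous edge cases to a reviewer. A minimal sketch, with a made-up threshold and items:

```python
# Confidence-based HITL routing sketch. The threshold, items, and
# confidence scores are all hypothetical.
CONFIDENCE_THRESHOLD = 0.90

predictions = [
    {"item": "invoice-1", "label": "approve", "confidence": 0.97},
    {"item": "invoice-2", "label": "approve", "confidence": 0.55},
    {"item": "invoice-3", "label": "reject",  "confidence": 0.93},
]

automated, needs_review = [], []
for p in predictions:
    # Confident predictions are automated; edge cases go to a human.
    target = automated if p["confidence"] >= CONFIDENCE_THRESHOLD else needs_review
    target.append(p)

print(f"automated: {[p['item'] for p in automated]}")
print(f"routed to human review: {[p['item'] for p in needs_review]}")
```

Human decisions on the escalated cases can then be fed back as labeled training data, which is how the loop helps close the "definition holes" the edge cases expose.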



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - April 18, 2021

How Can Financial Institutions Prepare for AI Risks?

In exploring the potential risks of AI, the paper provided “a standardized practical categorization” of risks related to data, AI and machine learning attacks, testing, trust, and compliance. Robust governance frameworks must focus on definitions, inventory, policies and standards, and controls, the authors noted. Those governance approaches must also address the potential for AI to present privacy issues and potentially discriminatory or unfair outcomes “if not implemented with appropriate care.” In designing their AI governance mechanisms, financial institutions must begin by identifying the settings where AI cannot replace humans. “Unlike humans, AI systems lack the judgment and context for many of the environments in which they are deployed,” the paper stated. “In most cases, it is not possible to train the AI system on all possible scenarios and data.” Hurdles such as the “lack of context, judgment, and overall learning limitations” would inform approaches to risk mitigation, the authors added. Poor data quality and the potential for machine learning/AI attacks are other risks financial institutions must factor in.


How to turn everyday stress into ‘optimal stress’

What triggers a stress response in one person may hardly register with another. Some people feel stressed and become aggressive, while others withdraw. Likewise, our methods of recovery are also unique—riding a bike, for instance, versus reading a book. Executives, however, aren’t usually aware of their stress-related patterns and idiosyncrasies and often don’t realize the extent of the stress burden they are already carrying. Leadership stereotypes don’t help with this. It’s no surprise that we can’t articulate how stress affects us when we equate success with pushing boundaries to excess, fighting through problems, and never admitting weakness. Many people we know can speak in detail about a favorite vacation but get tongue-tied when asked what interactions consistently trigger stress for them, or what time of day they feel most energized. To reach optimal stress, we need to be conscious of our stress; in neurological terms, it’s the first step toward lasting behavior change. As the psychiatrist and author Daniel Siegel writes, “Where attention goes, neural firing flows and neural connection grows.” And it is these newly grown neurological pathways that define our behavior and result in new habits.


How to Empower Transformation and Create ROI with Intelligent Automation

CIOs see ROI delivered in multiple ways. For example, a recent Forrester study identified that Bizagi’s platform offered 288% financial returns. CIOs seek benefits other than cost savings, such as increased net promoter scores, realized upsell opportunities, and improved end-user productivity gains. ... The catch is that automation sets a very high bar on what machines can perform reliably, especially when employees often interpret automation to mean “without any human involvement.” For example, you can automate many steps in a loan application and its approval processes when the applicant checks all the right boxes. However, most financial transactions have complex exceptions and actions that require orchestration across multiple systems. Managers and employees know the daily complications, and oversimplifying their jobs with only rudimentary automations often leads to a backlash from vocal detractors. That’s why CIOs and IT leaders need more than simple task automation, departmental applications, or one-off data analysis. Digital leaders recognize the importance of intelligence and orchestration to modernize workflows, meet customer expectations, leverage machine learning capabilities, and enable implementation of the required business rules.


Understand Bayes’ Theorem Through Visualization

Before going to any definition: Bayes’ theorem is normally used when we have a hypothesis, we have observed some evidence, and we would like to know the probability that the hypothesis holds given that the evidence is true. That may sound a bit confusing, so let’s use the above visualization for a better explanation. In the example, we want to know the probability of selecting a female engineer given that the selected person has finished a Ph.D. The first thing we need is the probability of selecting a female engineer from the population without considering any evidence. This term, P(H), is called the “prior”. ... As we know, Bayes’ theorem comes from Bayesian statistics, which relies on subjective probabilities and uses Bayes’ theorem to update knowledge and beliefs regarding the events and quantities of interest based on data. Hence, based on some knowledge, we can draw some initial inferences about the system (the “prior” in Bayes) and then “update” these inferences based on new data to obtain the “posterior”. Moreover, there are terms like Bayesian inference and frequentist statistical inference, which are not covered in this article.
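The update itself is one line of arithmetic. A minimal sketch of Bayes' theorem applied to the female-engineer example, using made-up numbers for all three probabilities:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# H = "selected person is a female engineer", E = "has finished a Ph.D."
# All probabilities below are illustrative, not real statistics.
p_h = 0.20          # prior P(H): female engineers in the population
p_e_given_h = 0.30  # likelihood P(E|H): Ph.D. rate among female engineers
p_e = 0.10          # evidence P(E): Ph.D. rate in the whole population

p_h_given_e = p_e_given_h * p_h / p_e  # posterior P(H|E)
print(f"P(H|E) = {p_h_given_e:.2f}")   # prints P(H|E) = 0.60
```

Observing the Ph.D. evidence raises the probability from the 20% prior to a 60% posterior; with different numbers, the same one-line update could equally lower it.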


Leveraging Geolocation Data for Machine Learning: Essential Techniques

Fortunately, we don’t have to worry about parsing these different formats and manipulating low-level data structures. We can use the wonderful GeoPandas library in Python that makes all this very easy for us. It is built on top of Pandas, so all of the powerful features of Pandas are already available to you. It works with GeoDataFrames and GeoSeries, which are “spatially-aware” versions of Pandas DataFrames and Series objects. It provides a number of additional methods and attributes that can be used to operate on geodata within a DataFrame. A GeoDataFrame is nothing but a regular Pandas DataFrame with an extra ‘geometry’ column for every row that captures the location data. GeoPandas can also conveniently load geospatial data from all of these different geo file formats into a GeoDataFrame with a single command. We can perform operations on this GeoDataFrame in the same way regardless of the source format. This abstracts away all of the differences between these formats and their data structures.
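GeoPandas supplies distance operations out of the box, but the underlying geometry is easy to see without the dependency. As a plain-Python sketch of one essential geolocation feature, here is the haversine great-circle distance between two latitude/longitude points:

```python
# Haversine great-circle distance, a common distance feature for
# geolocation-based ML. Pure standard library, no GeoPandas required.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in kilometres between two (lat, lon) points on a sphere."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# e.g. London to Paris, roughly 344 km
print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522)))
```

A derived column like "distance from each row's geometry to a reference point" is exactly the kind of feature the GeoDataFrame's extra methods compute for you across a whole dataset.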


Why Probability Theory is Hard

First, probability theorists don’t even agree what probability is or how to think about it. While there is broad consensus about certain classes of problems involving coins, dice, coloured balls in perfectly mixed bags and lottery tickets, as soon as we move into practical probability problems with more vaguely defined spaces of outcome, we are served with an ontological omelette of frequentism, Bayesianism, Kolmogorov axioms, Cox’s theory, subjective, objective, outcome spaces and propositional credences. Even if the probationary probability theorist is eventually indoctrinated (by choice or by accident of course instructor) into one or other school, none of these frameworks is conceptually easy to access. Small wonder that so much probabilistic pedagogy is boiled down to methodological rote learning and rules of thumb. There’s more. Probability theory is often not taught very well. The notation can be confusing; and don’t get me started on measure theory. The good news is that in terms of practical applications, very little can get you a very long way. 


Open-source, cloud-native projects: 5 key questions to assess risk

Another important indicator of risk relates to who owns or controls an open-source project. From a risk perspective, projects with neutral governance, where decisions are made by people from a variety of different companies, present a lower risk. The lowest-risk projects are ones that fall under vendor-neutral foundations. Kubernetes has been successful in part because it is shepherded by the Cloud Native Computing Foundation (CNCF). Putting Kubernetes into a neutral foundation provided a level playing field where people from different companies could work together as equals, to create something that benefits the entire ecosystem. The CNCF focuses on helping cloud-native projects set themselves up to be successful with resource documents, maintainer sessions, and help with various administrative tasks. In contrast, open-source projects controlled by a single company have higher risk because they operate at the whims of that company. Outside contributors have little recourse if that company decides to go in a direction that doesn't align with the expectations of the community's other participants. This can manifest as licensing changes, forks, or other governance issues within a project.


Interpreted vs. compiled languages: What's the difference?

In contrast to compiled languages, interpreted languages generate an intermediary instruction set that is not recognizable as source code. Nor is the intermediary architecture-specific, as machine code is. The Java language calls this intermediary form bytecode. This intermediary deployment artifact is platform-agnostic, which means it can run anywhere. The one caveat is that each runtime environment needs a preinstalled interpreter, which converts the intermediary code into machine code at runtime. The Java virtual machine (JVM) is the required interpreter that must be installed in any target environment for applications packaged and deployed as bytecode to run. The benefit of applications built with an interpreted language is that they can run in any environment. In fact, one of the mantras of the Java language when it was first released was "write once, run anywhere," as Java apps were not tied to any one OS or architecture. The drawback of an interpreted language is that the interpretation step consumes additional clock cycles, especially in comparison to applications packaged and deployed as machine code.
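The same compile-to-bytecode, interpret-at-runtime split exists in CPython, which makes a convenient stand-in illustration for Java’s javac/JVM pipeline:

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled the function body into platform-agnostic
# bytecode; the interpreter turns those instructions into machine
# operations at runtime, much as the JVM does for .class files.
dis.dis(add)
```

The disassembly shows abstract instructions (a binary-add operation, loads, a return) rather than anything tied to a particular CPU architecture.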


Disrupting the disruptors: Business building for banks

The strategic target of a new build should be nothing less than radical disruption. Banks should aim not only to expand their own core offerings but also to create a unique combination of products and functionality that will disrupt the market. Successful new launches come with a clear sense of mission and direction, as well as a road map to profitability (see sidebar “Successful business builders are realistic about the journey”). One regional digital attacker in Asia targeted merchant acquiring and developed a network with more than 700,000 merchants. In just four months, it created a product with the capacity to process payments through QR codes at the point-of-sale systems of the two main merchant acquirers in the region and to transfer money between personal accounts. In another case, an incumbent bank launched a state-of-the-art digital solution in just ten months. In China, a leading global bank launched a digital-hybrid business that focuses on financial planning and uses social media to connect with customers. A midsize Asian bank, meanwhile, launched an ecosystem of services for the digital-savvy mass and mass-affluent segment, aimed at making it easier for customers to manage their financial lives.


9 Trends That Are Influencing the Adoption of DevOps and DevSecOps

Despite the challenges of adopting these approaches, the potential gains to be made are generally seen as justifying the risk. For most development teams, this will first mean moving to a DevOps process, and then later evolving DevOps into DevSecOps. Beyond the operational gains that can be made during this transition lie a number of other advantages. One of the often overlooked effects of just how widespread DevOps has become is that, for many developers, it has become the default way of working. According to open source contributor and DevOps expert Barbara Ericson of Cloud Defense, “DevOps has suddenly become so ubiquitous in software engineering circles that you’ll be forgiven if you failed to realize the term didn’t exist until 2009...DevOps extends beyond the tools and best practices needed to accomplish its implementation. The successful introduction of DevOps demands a change in culture and mindset.” This trend is only likely to continue in the future, and could make it difficult for firms to hire talented developers if they are lagging behind on their own transition to DevOps.



Quote for the day:

"Leadership is about being a servant first." -- Allen West

Daily Tech Digest - April 17, 2021

Decoupling Frontends and Backends with GraphQL

GraphQL combines the best of APIs and Query Language. It is an API because a simple POST returns the data requested. And it is a query language because the user can ask for what she wants (as long as it is permissible in the definition of the GraphQL API endpoint). GraphQL has three distinct concepts: Types (such as Customer, Order, etc.) that the user (frontend developer) interacts with. These types are linked together in a graph — for example, a customer might have orders — hence the name GraphQL. It has an additional abstraction, an interface, that can be used to further hide types. This is particularly useful when there are multiple different implementations; Queries, such as customerById (queries are just entry points into the graph) return data of a type; and Resolvers, which describe the implementation of the queries and generation of the bits of data associated with types. For example, there might be a resolver that says the query customerById can be executed by issuing a SQL statement against a MySQL database, whereas the query orderByCustomer requires a GET against a REST endpoint.
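A toy sketch of the resolver idea in Python (all names and data hypothetical; a real service would use a GraphQL library such as graphene or Ariadne):

```python
# In-memory stand-ins for a MySQL table and a REST endpoint.
CUSTOMERS = {1: {"id": 1, "name": "Ada"}}
ORDERS = {1: [{"id": 10, "customerId": 1, "total": 42.0}]}

# Resolvers map each query (an entry point into the graph) to the code
# that fetches its data; one resolver might issue SQL, another an HTTP GET.
RESOLVERS = {
    "customerById": lambda id: CUSTOMERS.get(id),
    "ordersByCustomer": lambda id: ORDERS.get(id, []),
}

def execute(query, **args):
    # Dispatch the named query to its resolver.
    return RESOLVERS[query](**args)

print(execute("customerById", id=1)["name"])  # prints Ada
```

The frontend only sees the query names and types; which backend each resolver talks to is an implementation detail it never touches, which is precisely the decoupling at work.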


IoT in Mining

Mining companies have overcome the challenge of connectivity by implementing more reliable connectivity methods and data-processing strategies to collect, transfer and present mission-critical data for analysis. Satellite communications can play a critical role in transferring data back to control centers to provide a complete picture of mission-critical metrics. Mining companies worked with trusted IoT satellite connectivity specialists such as Inmarsat and their partner ecosystems to ensure they extracted and analyzed their data effectively. Cybersecurity will be another major challenge for IoT-powered mines over the coming years. As mining operations become more connected, they will also become more vulnerable to hacking, which will require additional investment in security systems. In April 2017, following a 2016 data breach at Goldcorp that disproved the previous industry mentality that miners are not typical targets, 10 mining companies established the Mining and Metals Information Sharing and Analysis Centre (MM-ISAC) to share cyber threat intelligence among peers.


BazarLoader Malware Abuses Slack, BaseCamp Clouds

According to researchers at Sophos, in the first campaign spotted, adversaries are targeting employees of large organizations with emails that purport to offer important information related to contracts, customer service, invoices or payroll. “One spam sample even attempted to disguise itself as a notification that the employee had been laid off from their job,” according to Sophos. The links inside the emails are hosted on Slack or BaseCamp cloud storage, meaning that they could appear to be legitimate if a target works at an organization that uses one of those platforms. In an era of remote working, the odds are good that this is the case. “The attackers prominently displayed the URL pointing to one of these well-known legitimate websites in the body of the document, lending it a veneer of credibility,” researchers said. “The URL might then be further obfuscated through the use of a URL shortening service, to make it less obvious the link points to a file with an .EXE extension.” If a target clicks on the link, BazarLoader downloads and executes on the victim’s machine. The links typically point directly to a digitally signed executable with an Adobe PDF graphic as its icon.


How the Biden Administration Can Make Digital Identity a Reality

Digital identity has already gained bipartisan support on Capitol Hill. In 2020, Representatives Bill Foster (D-IL) and John Katko (R-NY) introduced the Improving Digital Identity Act, designed to establish a nationwide approach to improving digital identity. Now, the Biden administration plans to leverage digital identity for modernization of public services, ranging from government assistance to healthcare to licensing. The act would be a step forward but wouldn't completely address needs in the public and private sectors. Rep. Foster notes that the bill would primarily address the government's need for digital identity, paying less attention to issues (e.g., transaction friction, fraud) facing enterprises and consumers. Given this, the Biden administration must take a broader, holistic approach to digital identity, eliminating data siloing that would make future digital IDs unnecessarily purpose-specific. Any error would allow bad actors to access sensitive data and impersonate customers, resulting in fraudulent requests for government services, credit cards, loans, or licenses.


Manufacturing Performance Intelligence: How digital unlocks resilient, agile operations

Digital solutions have a huge role to play in enabling Industry 4.0 and driving sustainable practices. As manufacturers rapidly accelerated their adoption of digital operating models, they have been able to safeguard employee health, ensure commercial resilience and elevate performance using digital intelligence. This is the new opportunity for industry, and AVEVA’s portfolio combines the operational data management of PI System with industrial analytics, enabling us to lead the way. By harnessing the power of information with artificial intelligence and human insight, AVEVA is leading the industry with Performance Intelligence. Schneider Electric’s network of Smart Factories was among the world’s first to transform operations, pioneering AVEVA’s Discrete Lean Management software and pivoting to cloud-based operating models to safeguard production. These changes transformed how we operate, cutting downtime by 44% and driving 21% increases in energy efficiency in key factories. As a result, the World Economic Forum has recognized three Smart Factories as Advanced Manufacturing Lighthouses.


Designing & Managing for Resilience

The concept of shared capacity and reciprocity within an organization is more complex than simply directing teams to work together. Many organizations do have cross-functional work teams or attempt to break down organizational silos by rotating executives throughout the business. However, organizations are defined by reporting structures, functional units or product teams, each of which has its own goals and objectives. In addition, an engineering leader is tasked with setting direction, vision and priorities for their teams for a given quarter or phase of the business lifecycle, which may put them at a different tempo than their counterparts. Systemic and difficult problems that span organizational boundaries can be emergent or continuously changing as different teams attempt to mitigate the problems within their own scope of authority. This can make it difficult to coordinate clear goals and objectives with peers for inter-organizational initiatives. Therefore, a function of the resilient leader is to advocate for capacity sharing and reciprocity as part of their team’s goals and priorities.


Cyber security for telehealth services

The goal of cybersecurity is to reduce the risk of cyber-attacks and to protect organizations and individuals from intentional and deliberate exploitation of security vulnerabilities in systems, networks, and technologies. Suppose you have finished a teleconsultation on Practo and are about to check out: you are offered payment options via debit card, credit card or UPI. Like you, millions of users are sharing such sensitive information on the platform; have you ever wondered how secure the information on Practo is? From updated privacy policies to security-focused patents that use AI for data security, each company is increasing its focus on data protection to promote user trust. With the continuing growth of the digital world, cybersecurity threats will intensify as hackers learn to adapt to security strategies. This will increase the overall need for cybersecurity, with companies paying more and more highly qualified security professionals to protect their vulnerable assets from cyber-attacks. Telehealth also means you no longer have to travel: your appointment with the physician takes place through a screen.


Beyond the Quickstart: Running Apache Kafka as a Service on Kubernetes

Kubernetes provides many networking options such as node ports, ingress, load balancers and, with Red Hat OpenShift, routes as well. Kafka requires the producers and consumers to talk to individual brokers based on the placement of partitions and partition leaders. Based on the different networking options, you have to configure your network correctly so that the producers and consumers are able to individually address the brokers. Kafka exposes the “advertised.listeners” option in the broker configuration, which allows the clients to directly connect to the brokers. When configuring the Kubernetes services to allow access to the brokers, you will also configure the “advertised.listeners” in the broker to ensure that producers and consumers are able to connect to the individual brokers. Kubernetes abstracts infrastructure, following an interface pattern wherein third-party providers can create their own plugins that follow a standard interface definition. So you could also build your own routing layer to make sure you are able to address the brokers. Kubernetes allows you to do this via ingress resources.
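A hedged sketch of what such a broker configuration might look like (hostnames, ports and listener names are all illustrative, not a recommended production setup):

```properties
# Internal clients resolve the broker through its Kubernetes headless
# service; external clients reach it through a per-broker route or
# load balancer that maps back to this specific broker.
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://kafka-0.kafka-headless.demo.svc:9092,EXTERNAL://broker-0.example.com:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

The key point is that each broker advertises an address that clients can actually reach for that individual broker, rather than a load-balanced address shared by the whole cluster.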


Using The Internet Of Things For Smart Office Automation

Scheduling is critical in a post-COVID office. IoT technology makes it much easier to keep staff at an optimum number of people throughout the day to ensure compliance with safety practices. Companies can create a check-in process and monitor any potential warning signs. This system enables companies to keep track of who was in the same room and, using smart parking solutions, where employees parked their cars. Smart scheduling can cut down overtime and stagger start and leave times so that people can have a more flexible schedule while keeping the number of people in the same areas at a minimum. Smart scheduling can automatically create a master plan that considers all staff members’ preferences and meets the company’s overall requirements. Smart scheduling for IoT-enabled devices and networks is useful in a post-COVID office environment. Companies can automatically create schedules for IoT items needed to match employee schedules. This is convenient if employees call in sick because their workspaces can adjust automatically if they are not at work. Making real-time changes to IoT schedules is one of the best uses of smart office technology.


Bank Groups Object to Proposed Breach Notification Regulation

The four banking groups contend that compliance with the new regulation would prove too burdensome for financial institutions. "We share the goal to develop a flexible incident notification framework offering early awareness of disruptions, while also being appropriately scoped to avoid over-reporting and unnecessary burden for the banking industry, third-party service providers and the supervisory community," the groups wrote. The proposed regulation bases its definition of a reportable computer security incident on the National Institute of Standards and Technology's definition. The NIST definition is: "An occurrence that results in actual or potential jeopardy to the confidentiality, integrity or availability of an information system or the information the system processes, stores or transmits or that constitutes a violation or imminent threat of violation of security policies, security procedures or acceptable use policies." The four financial groups wrote that the NIST definition is too broad, and if it's included in a breach notification requirement, it would result in insignificant occurrences becoming reportable incidents.



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." --  Katzenbach & Smith

Daily Tech Digest - April 15, 2021

Cyber criminals are targeting the cloud — here’s how to defend against them

While threat actors may use methods to actively infiltrate a company’s defences, sometimes the vulnerabilities are already there. CSPs are usually quick to patch known vulnerabilities without requiring customer interaction. However, when cloud services involve the customer in managing the software, oversight can be tricky due to the complexity of the environment. Businesses should prioritise regular scanning and patching of known vulnerabilities with the latest version of each type of software they’re running on their system. On top of this, IT leaders should maintain an up-to-date inventory of assets to ensure visibility of all endpoints that require patching. In the rush to gain a competitive advantage, cloud environments are evolving rapidly with organisations using a hybrid or multi-cloud approach. This complexity can lead to misconfiguration if set up incorrectly. Misconfigured cloud infrastructures can expose data or resources to the public internet, and failure to implement encryption or multi-factor authentication can allow actors to access cloud-related tools, data, assets, or systems.


6 Key Forces Shaping Technology and Service Providers Through 2025

COVID-19 shut down businesses, supply chains and entire countries, and changed the way companies buy, sell and work. But other events like trade wars, legislation, and regulations can all impact technology providers. It’s not about predicting these events as much as identifying existing trends (e.g., work from home, e-commerce) that will accelerate if these occur. Customers demand products and services that meet their particular business or IT needs. In a broader sense, customer demand and expectations are shaped by cultural changes and world events (e.g., stay-at-home orders increasing demand for distributed work tools). User experiences and trends like mobility or subscription and freemium pricing, which were made popular in consumer markets, have led IT customers to want the same benefits from technology providers. Emerging technologies may seem like novelties when they first appear, but when these technologies become a trend, they can profoundly shape buying and selling behavior and enable new business models. Over the next several years, today’s immature technologies and “weak signals” have the potential to disrupt what your product does, who it serves and how you deliver it.


Fast Data - It’s Not Your Grandfather’s Operational Data

Fast data enables full-circle delivery of data that is “in motion.” In other words, it’s generated and consumed instantly by interactive applications running on large numbers of devices. Fast data enables organizations to act on insights gained from user interactions as these insights are generated at the point of the interaction. And because decisions or actions take place right at the front-end, fast data architectures are, by definition, distributed and real-time. Big data is focused on capturing data, storing it, and processing it periodically in batches. A fast data architecture, on the other hand, processes events in real time. Big data focuses on volume, while with fast data, the emphasis is on velocity. Here’s an example. A credit card company might want to create credit risk models based on demographic data. That’s a big data challenge. A fast data architecture would be required if that credit card company wants to send fraud alerts to customers in real-time, when a suspicious activity occurs in their accounts. Think of FedEx. To track millions of packages and ensure on-time and accurate delivery across the planet, FedEx needs access to the right real-time data to perform real-time analysis and deliver the right interaction—right away, right there, not a day later.
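The batch-versus-event contrast above can be sketched in a few lines (a toy illustration with a hypothetical alert threshold, not a real fraud model):

```python
# A stream of card transactions arriving one by one.
events = [
    {"card": "A", "amount": 25.0},
    {"card": "A", "amount": 9800.0},  # suspicious
    {"card": "B", "amount": 12.5},
]

# Big-data style: collect everything, then analyze later in a batch.
batch_total = sum(e["amount"] for e in events)

# Fast-data style: evaluate every event at the moment it occurs.
alerts = []
def on_event(event, threshold=5000.0):
    if event["amount"] > threshold:
        alerts.append(event["card"])  # e.g., send the fraud alert now

for e in events:
    on_event(e)

print(alerts)  # ['A']
```

The batch view answers “what happened over the period”; the event handler acts at the point of interaction, which is what distinguishes a fast data architecture.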


Better Software Writing Skills in Data Science: Dead Programs Tell No Lies

When you figure out that something “bad” happened to your program, you know it should not have happened, and you know there is no way around it, the way to throw an exception higher up can be via an assert. This throws an exception to the caller, which then has to decide what to do with it. An example: you expect an int as input and you get a string. This is contract breaking; why would your program decide how to handle improper data, when it cannot know why the contract was broken? It should be the responsibility of the caller to handle that exception properly. That might warrant an assert right there. Depending on the organization you work in, when and how to use exceptions and asserts might get philosophical. On the other hand, it could also be subject to very specific rules. There might be really valid reasons why an organization prefers one approach over another. Learn the rules, and if there is no rule, have a discussion around it and apply your best judgement. In any case, dead programs tell no lies. Better to kill a program than to deal with polluted data a year in the future.
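A minimal illustration (hypothetical function; note that production code often prefers an explicit raise of TypeError or ValueError, since asserts can be disabled with python -O):

```python
def mean(values):
    # Contract: a non-empty sequence of numbers. If the caller breaks
    # it, fail loudly instead of guessing; dead programs tell no lies.
    assert values, "mean() requires a non-empty sequence"
    assert all(isinstance(v, (int, float)) for v in values), \
        "mean() requires numbers"
    return sum(values) / len(values)

print(mean([1, 2, 3]))             # 2.0
try:
    mean([1, 2, "3"])              # contract broken by the caller
except AssertionError as exc:
    print(f"caller decides: {exc}")
```

The function refuses to limp along with a string in the data; the decision about what to do next belongs to the caller that broke the contract.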


How to avoid social engineering scams

Social engineering is a collective term for ways in which fraudsters manipulate people into performing certain actions. It’s generally used in an information security context to refer to the tactics crooks use to trick people into handing over sensitive information or exposing their devices to malware. This often comes in the form of phishing scams – messages supposedly from a legitimate sender that ask the recipient to download an attachment or follow a link that directs them to a bogus website. However, social engineering isn’t always malicious. For example, say you need someone to do you a favour, but you’re unsure that they’ll agree if you ask them apropos of nothing. You might grease the wheels by offering to do something for them first, making them feel obliged to say yes when you ask them to return the favour. That’s a form of social engineering. You’re performing an action that will compel the person to do something that will benefit you. Understanding social engineering in this context helps you see that social engineering isn’t simply an IT problem. It’s a vulnerability in the way we make decisions and perceive others – something we delve into more in the next section.


Mesh networking vs. traditional Wi-Fi routers: What is best for your home office?

Before changing your setup, you should also consider your ISP package. If you're subscribed to a low-speed offering, new equipment is not going to necessarily help. Instead, package upgrades could be a better option. If you are a sole user and need a stable, powerful connection -- such as for resource-hungry work applications or gaming -- a traditional router may be all you need. Wired should be quicker than wireless, and so investment in a simple Ethernet cable, easily picked up for $10 to $15, could be enough. Wi-Fi range extenders, too, could be considered as an alternative to mesh if you just need to boost coverage in some areas, and will likely be less expensive than purchasing individual mesh nodes. Some vendors also offer mesh 'bolt-ons' such as Asus' AiMesh, which can connect up existing routers to create a mesh-like coverage network without ripping everything out and starting again. However, mesh networking is here to stay, and at a time when many of us are now working from home rather than in traditional offices, a mesh setup could be a future-proof investment.


Intel Report Spotlights Importance Of Transparency In Cybersecurity

Transparency and security assurance are important, but the Intel report also reveals other factors that businesses consider for endpoint and network infrastructure purchasing decisions. Interoperability with existing tools and platforms ranked highest with 63%, followed by installation cost (58%), system complexity (57%), vendor support (55%), and scalability issues (53%). One area that is particularly interesting is the intersection of hardware and software and how they can work together to solve cybersecurity problems in innovative ways. More than three-fourths of the survey participants indicated that it is highly important for technology providers to offer hardware assisted capabilities to mitigate software exploits. More than 70% also noted that it is important for technology providers to offer mechanisms and security controls to protect distributed workloads. Suzy Greenberg, Vice President of Intel Product Assurance and Security for Intel, joined me recently on the TechSpective Podcast to talk about this report and some of the insights and trends she finds interesting.


Security Bug Allows Attackers to Brick Kubernetes Clusters

The impact could be fairly wide: “As of Kubernetes v1.20, Docker is deprecated and the only container engines supported are CRI-O and Containerd,” Sasson explained. “This leads to a situation in which many clusters use CRI-O and are vulnerable. In an attack scenario, an adversary may pull a malicious image to multiple different nodes, crashing all of them and breaking the cluster without leaving a way to fix the issue other than restarting the nodes.” When a container engine pulls an image from a registry, it first downloads its manifest, which has the instructions on how to build the image. Part of that is a list of layers that compose the container file system, which the container engine reads and then downloads and decompresses each layer. “An adversary could upload to the registry a malicious layer that aims to exploit the vulnerability and then upload an image that uses numerous layers, including the malicious layer, and by that create a malicious image,” Sasson explained. “Then, when the victim pulls the image from the registry, it will download the malicious layer in that process and the vulnerability will be exploited.” Once the container engine starts downloading the malicious layer, the end result is a deadlock.


What is the market opportunity for NFTs?

NFTs have seen massive increases in trade volume and users in recent times. Investments in NFTs rose by 299% across 2020, and the NFT market’s sales volume grew by 2,882% in February alone. This increased interest results in part from the improving infrastructure surrounding NFTs, which has supported full-stack services spanning trading venues, minting platforms, marketplaces and more. While detractors have suggested that the current crest of interest represents a bubble, experts have pointed to the fact that the technology behind NFTs is strong enough to survive a possible crash and is expected to be around for quite some time. According to Beeple, a digital artist who recently made almost $70 million from his NFT sale, the technology will support any work or piece of real value. Similarly, the new owners of Beeple’s record-setting piece of artwork, Vignesh (Metakovan) and Anand (Twobadour), believe their transaction represents a paradigm shift in how the world perceives art. They see NFTs as having an equalizing effect between the traditionally dominant West and the global South.


Applications, Challenges For Using AI In Fabs

One of the main applications for machine learning is defect detection and classification. The first step is using machine learning to detect actual defects and ignore noise. We are seeing many examples where machine learning is much better at extracting the actual killer defect signal from a noisy background of process and pattern variations. The second step is to leverage machine learning to classify defects. The challenge these days is that when optical inspectors run at high sensitivity to capture the most subtle, critical defects on the most critical layers, other anomalies are also detected. Machine learning is first applied to the inspection results to optimize the defect sample plan sent for review. Then, high-resolution SEM images are taken of those sites and additional machine learning is used to analyze and classify the defects to provide fab engineers with accurate information about the defect population – actionable data to drive process decisions. An emerging application is to make use of machine learning to be more predictive about where to inspect and measure.



Quote for the day:

"Real leaders are ordinary people with extraordinary determinations." -- John Seaman Garns

Daily Tech Digest - April 14, 2021

How far have we come? The evolution of securing identities

With internal, enterprise-facing identity, these individuals work for your organization and are probably on the payroll. You can make them do things that you can’t ask customers to do. Universal 2nd Factor (U2F) is a great example. You can ship U2F to everyone in your organization because you’re paying them. Plus, you can train internal staff and bring them up to speed with how to use these technologies. We have a lot more control in the internal organization. Consumers are much harder. They are more likely to just jump ship if they don’t like something. Adoption rates of technologies, like multifactor authentication, are extremely low in consumer land because people don’t know what it is or the value proposition. We also see organizations reticent to push it. A few years ago, a client had a 1 percent adoption rate of two-factor authentication. I asked, “Why don’t you push it harder?” They said that every time they have more people using two-factor authentication, there are more people who get a new phone and don’t migrate their soft token or save the recovery codes. Then, they call them up and say, “I have my username and password but not my one-time password.”


Informatica debuts its intelligent data management cloud

As data becomes more valuable, so does data management, Informatica argues. Its IDMC offers more than 200 intelligent cloud services, powered by Informatica's AI engine CLAIRE. It applies AI to metadata to give an organization an understanding of its "data estate," Ghai explained. The "data estate" tells you about the fragmentation of data -- its location and the various domains of data. "And through that insight," Ghai said, Informatica will "automate the ability to connect to data, to build data pipelines, process data, provision it for analytics... to apply advanced transformations to cleanse that data and trust it... to match, merge and build a single source of truth." From there, the platform aims to make data more accessible to business users with features like the "data marketplace." With the marketplace, users can "shop for data" much as one would shop for consumer goods on the Amazon marketplace, Ghai explained. The IDMC is microservices-based and API-driven, with elastic and serverless processing. It's built for hybrid and multi-cloud environments. The platform is already running at scale, processing more than 17 trillion transactions each month.


Microservices in the Cloud-Native Era

With developer tools and platforms like Docker, Kubernetes, AWS, GitHub, etc., software development has become very approachable and easy. Suppose you have a monolithic architecture and three million lines of code. Making changes to the code base whenever required and releasing new features was never an easy task: it created a lot of friction between developer teams, and finding the mistake that was causing the code to break was a monumental task. That’s where microservices architecture shines. Many companies have recently moved from their humongous monolithic architecture to microservices architecture for a bright future. There are many advantages of shifting to microservices architecture. While a monolithic application puts all of its functionality into a single code base and scales by replicating on multiple servers, a microservices architecture breaks an application down into several smaller services. It then segments them by logical domains. Together, microservices communicate with one another over APIs to form what appears as a single application to end-users. The problem with a monolithic application is that when something goes wrong, the operations team blames development, and development blames QA.


Modern Data Warehouse & Reverse ETL

“Reverse ETL” is the process of moving data from a modern data warehouse into third-party systems to make the data operational. Traditionally, data stored in a data warehouse is used for analytical workloads and business intelligence (i.e., identifying long-term trends and influencing long-term strategy), but some companies are now recognizing that this data can be further utilized for operational analytics. Operational analytics helps with day-to-day decisions, with the goal of improving the efficiency and effectiveness of an organization’s operations. In simpler terms, it’s putting a company’s data to work so everyone can make better and smarter decisions about the business. As an example, if your MDW ingested customer data that was then cleaned and mastered, that customer data can be copied into multiple SaaS systems, such as Salesforce, to ensure a consistent view of the customer across all systems. Customer info can also be copied to a customer support system to provide better support by giving agents more information about that person, or copied to a sales system to give the customer a better sales experience.
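A reverse-ETL job is, at its core, a sync loop: read mastered rows out of the warehouse and upsert them into an operational system. A minimal sketch, with the warehouse query and the SaaS API both simulated by plain Python objects (all names hypothetical):

```python
# Minimal reverse-ETL sketch: customer rows already cleaned and
# mastered in the warehouse are pushed to an operational system so
# both share one consistent view of the customer.

warehouse_customers = [  # stands in for a SELECT against the warehouse
    {"id": 1, "email": "a@example.com", "segment": "enterprise"},
    {"id": 2, "email": "b@example.com", "segment": "smb"},
]

crm = {}  # stands in for a SaaS upsert API, e.g. a CRM

def upsert_to_crm(row):
    """Idempotent upsert keyed on the customer id, so re-running the
    sync never creates duplicates."""
    crm[row["id"]] = {"email": row["email"], "segment": row["segment"]}

def reverse_etl(rows):
    for row in rows:
        upsert_to_crm(row)
    return len(rows)

print(reverse_etl(warehouse_customers))  # 2 rows synced
print(crm[1]["segment"])                 # enterprise
```

In a real pipeline the upsert would hit the vendor's bulk API and the job would sync only rows changed since the last run, but the shape of the loop is the same.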


The Microsoft-Nuance Deal: A new push for voice technology?

Microsoft has had its hand in voice technology since debuting its virtual assistant Cortana in 2015 as part of the initial Windows 10 release. Since then, Cortana has evolved to support Android and iOS devices, Xbox, the Edge browser, Windows Mixed Reality headsets, and third-party devices such as thermostats and smart speakers. According to Microsoft, Cortana is currently used by more than 150 million people. More recently, the company has repositioned Cortana as an office assistant rather than a general-purpose one. “Voice recognition is gaining momentum and will be used in every type of industry — from transcription to command-and-control types of applications — and acquiring a leading vendor in this area just makes sense,” Pleasant said. She stressed that as users become familiar with Cortana, Siri and Amazon's Alexa at home, they expect to see similar speech-enabled technologies at work. She also noted that Microsoft is one of the few companies with the resources to acquire a company like Nuance, allowing it to jump ahead of rivals that might have wanted to do the same thing.


Get your firm to say goodbye to password headaches

In a passwordless environment, no password storage or management is needed. Therefore, IT teams are no longer burdened by setting password policies, detecting leaks, resetting forgotten passwords and complying with password storage regulation. It’s fair to say that for many helpdesk teams, password resets are the most common user request. Past research has found that some larger organizations spend up to $1 million per year on staffing and infrastructure to handle password resets alone. Resetting a password is probably not a particularly complex task for most IT departments, but the sheer number of requests makes handling them extremely time-consuming. Just how much time does that take away from helpdesks on a daily, weekly or monthly basis? It’s one of those hidden costs your firm is incurring that can be eliminated by giving people passwordless connections into their environment. Passwords remain a weakness for those trying to secure customer and corporate data, and they are the number one target of cyber criminals.


Modernising the insurance industry with a shared IT platform model

Pockets of the insurance industry are heading this way, by, for example, using vehicle trackers that reward good driving with lower premiums. But behind the scenes, many organisations sit on a mass of hugely complex products and equally unwieldy legacy systems that prevent them from working in a way that is agile and digital-first. Assess your systems as they stand today and you may find that several, or possibly hundreds, have been redundant for some time. Eliminating these systems, which are nothing more than drains on the business’s resources, will allow for a greater level of agility. By moving away from cumbersome legacy systems that are no longer fit for purpose, insurers can create a simplified system that unifies silos, making everyday work more efficient and saving the business money, which it can then reinvest into creating a customer-centric company capable of rivalling its strongest competitors. Imagine a world where you could simplify your product range, providing cover for the highest number of people with the fewest number of insurance products.


5 Great Ways To Achieve Complete Automation With AI and ML

The self-healing technique in test automation addresses a major maintenance issue: automation scripts break whenever an object property changes, including its name, ID, or CSS. This is where the dynamic location strategy comes into the picture: programs automatically detect these changes and fix them dynamically without human intervention. This changes the overall approach to test automation to a great extent, as it allows teams to use the shift-left approach in agile testing methodology, making the process more efficient with increased productivity and faster delivery. ... This self-healing technique saves developers a lot of the time otherwise spent identifying changes and updating them in the UI. Below is the end-to-end process flow of the self-healing technique as handled by AI-based test platforms: the moment the AI engine detects that a test may break because an object property has changed, it extracts the entire DOM and studies the properties. Using the dynamic location strategy, it then runs the test cases without anyone noticing that any such change was made.
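The core of a dynamic location strategy can be sketched in a few lines (this is an illustrative toy, not any vendor's engine): record several attributes of an element when the test is authored, and when the primary locator no longer matches, fall back to the other recorded attributes instead of failing the test.

```python
# Hypothetical self-healing locator: try recorded attributes in order
# of preference; a test "heals" when a fallback attribute still gives
# a unique match after the primary one (e.g. the id) was changed.

def find_element(dom, fingerprint):
    """dom: list of element dicts (a stand-in for the extracted DOM).
    fingerprint: attributes recorded when the test was authored."""
    for attr in ("id", "name", "css_class", "text"):
        wanted = fingerprint.get(attr)
        if wanted is None:
            continue
        matches = [el for el in dom if el.get(attr) == wanted]
        if len(matches) == 1:
            return matches[0]  # healed via this attribute if id failed
    raise LookupError("element not found by any recorded attribute")

dom = [  # the id was renamed between releases
    {"id": "btn-submit-v2", "name": "submit", "text": "Submit"},
    {"id": "btn-cancel", "name": "cancel", "text": "Cancel"},
]
fingerprint = {"id": "btn-submit", "name": "submit", "text": "Submit"}
print(find_element(dom, fingerprint)["id"])  # btn-submit-v2
```

A production engine would also update the stored fingerprint after a successful heal and weight attributes by stability, but the fallback loop above is the essence of the technique.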


Apache Software Foundation retires slew of Hadoop-related projects

ASF's Vice President for Marketing & Publicity, Sally Khudairi, who responded by email, said "Apache Project activity ebbs and flows throughout its lifetime, depending on community participation." Khudairi added: "We've...had an uptick in reviewing and assessing the activity of several Apache Projects, from within the Project Management Committees (PMCs) to the Board, who vote on retiring the Project to the Attic." Khudairi also said that Hervé Boutemy, ASF's Vice President of the Apache Attic "has been super-efficient lately with 'spring cleaning' some of the loose ends with the dozen-plus Projects that have been preparing to retire over the past several months." Despite ASF's assertion that this big data clearance sale is simply a spike of otherwise routine project retirements, it's clear that things in big data land have changed. Hadoop has given way to Spark in open source analytics technology dominance, the senseless duplication of projects between Hortonworks and the old Cloudera has been halted, and the Darwinian natural selection process among those projects completed.


DNS Vulnerabilities Expose Millions of Internet-Connected Devices to Attack

In a new technical report, Forescout and JSOF describe the set of nine vulnerabilities they discovered as giving attackers a way to knock devices offline or to download malware on them in order to steal data and disrupt production systems in operational technology environments. Among the most affected are organizations in the healthcare and government sectors because of the widespread use of devices running the vulnerable DNS implementations in both environments, Forescout and JSOF say. According to the two companies, patches are available for the vulnerabilities in FreeBSD, Nucleus NET, and NetX. Device vendors using the vulnerable stacks should provide updates to customers. But because it may not always be possible to apply patches easily, organizations should consider mitigation measures, such as discovering and inventorying vulnerable systems, segmenting them, monitoring network traffic, and configuring systems to rely on internal DNS servers, they say. The two companies also released tools that other organizations can use to find and fix DNS implementation errors in their own products. 
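The vulnerabilities described in the report center on flaws in DNS message parsing, so it is worth sketching the classic hardening such findings call for (this is an illustrative example, not code from any of the affected stacks): when decompressing a DNS name, bound both the number of compression-pointer jumps and the total name length, so a malformed response cannot send the parser into a loop or past the end of a buffer.

```python
# Hardened DNS name decompression sketch (per RFC 1035 wire format):
# labels are length-prefixed; a byte with the top two bits set is a
# pointer to an earlier offset. Unbounded pointer chasing is the kind
# of parsing flaw that lets crafted responses crash or hijack devices.

MAX_JUMPS = 8    # defensive cap on compression-pointer jumps
MAX_NAME = 255   # RFC 1035 limit on total name length

def decode_name(msg, offset):
    labels, jumps, pos = [], 0, offset
    while True:
        length = msg[pos]
        if length == 0:                       # root label: end of name
            break
        if length & 0xC0 == 0xC0:             # compression pointer
            jumps += 1
            if jumps > MAX_JUMPS:
                raise ValueError("compression pointer loop")
            pos = ((length & 0x3F) << 8) | msg[pos + 1]
            continue
        labels.append(msg[pos + 1 : pos + 1 + length].decode("ascii"))
        pos += 1 + length
        if sum(len(l) + 1 for l in labels) > MAX_NAME:
            raise ValueError("name too long")
    return ".".join(labels)

# "example.com" at offset 0; "www" at offset 13 points back to it.
msg = b"\x07example\x03com\x00" + b"\x03www\xc0\x00"
print(decode_name(msg, 13))  # www.example.com
```

A self-referencing pointer such as `b"\xc0\x00"` at offset 0 would loop forever in a naive parser; here the jump counter rejects it, which is exactly the class of bug the mitigation guidance above is meant to catch in fielded devices.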




Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne