Daily Tech Digest - April 19, 2021

Time to Modernize Your Data Integration Framework

You need to be able to orchestrate the ebb and flow of data among multiple nodes, whether as multiple sources, multiple targets, or multiple intermediate aggregation points. The data integration platform must also be cloud native today. This means the integration capabilities are built on a platform stack that is designed and optimized for cloud deployments and implementation. This is crucial for scale and agility -- a clear advantage the cloud gives over on-premises deployments. Additionally, data management centers on trust. Trust is created through transparency and understanding, and modern data integration platforms give organizations holistic views of their enterprise data and deep, thorough lineage paths that show how critical data traces back to a trusted, primary source. Finally, we see modern data analytics platforms in the cloud able to dynamically, and even automatically, scale to meet the increasing complexity and concurrency demands of the query executions involved in data integration. The new generation of data integration platforms also works at any scale, executing massive numbers of data pipelines that feed and govern the insatiable appetite for data in the analytics platforms.


Will codeless test automation work for you?

While outsiders view testing as simple and straightforward, it's anything but. Until as recently as the 1980s, the dominant idea in testing was to do the same thing repeatedly and write down the results. For example, you could type 2+3 into a calculator and see 5 as a result. With this straightforward, linear test, there are no variables, loops or conditional statements. The test is so simple and repeatable, you don't even need a computer to run it. This approach is born from thinking akin to codeless test automation: Repeat the same equation and get the same result each time for every build. The two primary methods to perform such testing are the record and playback method, and the command-line test method. Record and playback tools run in the background and record everything; testers can then play back the recording later. Such tooling can also create verification points, to check the expectation that the answer field will become 5. Record and playback tools generally require no programming knowledge at all -- they just repeat exactly what the author did. It's also possible to express tests visually. Command-driven tests work with three elements: the command, any input values and the expected results.
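
To make that three-element structure concrete, here is a minimal sketch of a command-driven test table in Python; the runner, the "add" command and the sample rows are purely illustrative and not taken from any particular codeless testing tool.

```python
# Hypothetical command-driven test table: each row names a command, its input
# values and the expected result, mirroring the calculator example above.
COMMANDS = {
    "add": lambda a, b: a + b,   # the operation under test
}

TEST_TABLE = [
    # (command, input values, expected result)
    ("add", (2, 3), 5),
    ("add", (10, -4), 6),
]

def run_table(table):
    failures = []
    for command, inputs, expected in table:
        actual = COMMANDS[command](*inputs)   # execute the named command
        if actual != expected:                # the verification point
            failures.append((command, inputs, expected, actual))
    return failures

if __name__ == "__main__":
    print("failures:", run_table(TEST_TABLE))
```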


Ghost in the Shell: Will AI Ever Be Conscious?

It’s certainly possible that the scales are tipping in favor of those who believe AGI will be achieved sometime before the century is out. In 2013, Nick Bostrom of Oxford University and Vincent Mueller of the European Society for Cognitive Systems published a survey in Fundamental Issues of Artificial Intelligence that gauged the perception of experts in the AI field regarding the timeframe in which the technology could reach human-like levels. The report reveals “a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075.” Futurist Ray Kurzweil, the computer scientist behind music-synthesizer and text-to-speech technologies, is a believer in the fast approach of the singularity as well. Kurzweil is so confident in the speed of this development that he’s betting hard. Literally, he’s wagering Kapor $10,000 that a machine intelligence will be able to pass the Turing test, a challenge that determines whether a computer can trick a human judge into thinking it itself is human, by 2029.


Is your technology partner a speed boat or an oil tanker?

The opportunity here really cannot be overstated. It is there for the taking by organisations that are willing to approach technological transformation in a radically different way. This involves breaking away from monolithic technology platforms, obstructive governance procedures, and the eye-wateringly expensive delivery programmes so often facilitated by traditional large consulting firms. The truth is, you simply don’t need hundreds of people to drive significant change or digital transformation. What you do need is to adopt new technology approaches, re-think operating models and work with partners who are agile experts, who will fight for their clients' best interests and share their knowledge to upskill internal staff. Hand-picking a select group of top individuals to work in this way provides a multiplier of value when compared to hiring greater numbers of less experienced staff members. Of course, external partners must be able to deliver at the scale required by the clients they work with. But just as large organisations have to change in order to embrace the benefits of the digital age, consulting models too must adapt to offer the services their clients need at the value they deserve.


Best data migration practices for organizations

The internal IT team needs to work closely with the service provider to thoroughly understand and outline the project requirements and deliverables. This ensures that no aspect is overlooked and that both sides are up to speed on the security and regulatory compliance requirements. Not just the vendor, but the team members and all the tools used in the migration need to meet all the necessary certifications to carry out a government project. Of course, certain territories will have more stringent requirements than others. Finally, an effective transition or change management strategy will be important to complete the transition. Proper internal communications and comprehensive training for employees will help everyone involved be aware of what’s required of them, including grasping any new processes or protocols and avoiding any productivity loss during the data migration. While the nitty-gritty of a public sector migration might be similar to a private company’s, a government data migration can be a much longer and more unwieldy process, especially with the vast number of people and the copious amounts of sensitive data involved.


Will AI dominate in 2021? A Big Question

It's fair to agree that technologies are captivating us completely with their interesting innovations and gadgets. From artificial intelligence to machine learning, IoT, big data, virtual and augmented reality, blockchain, and 5G, everything seems set to take over the world before long. Keeping to the topic of artificial intelligence, this technology has expanded its grip on our lives without even making us realize that fact. In the days of the pandemic, IT experts kept working from home and the tech grounds kept witnessing smart ideas and AI-driven innovations. Artificial intelligence is also the new normal. It is going to be the center of our new normal, and it will drive the other nascent technologies toward success. Soon, AI will be the genius core of automated and robotic operations. In the blink of an eye, artificial intelligence has been adopted rapidly by companies and is making its way into several sectors. 2020 saw this deployment on a wider scale: AI experts were working from home, but progress didn’t stop in the tech fields.


The promise of the fourth industrial revolution

There are some underlying trends in the following vignettes. The internet of things and related technologies are in early use in smart cities and other infrastructure applications, such as monitoring warehouses, or components of them, such as elevators. These projects show clear returns on investment and benefits. For instance, smart streetlights can make residents’ lives better by improving public safety, optimizing the flow of traffic on city streets, and enhancing energy efficiency. Such outcomes are accompanied by data that’s measurable, even if the social changes are not—such as the reduction in workers’ frustration when they spend less time waiting for an office elevator. Early adoption is also found in uses in which the harder technical or social problems are secondary, or, at least, the challenges make fewer people nervous. While cybersecurity and data privacy remain important for systems that control water treatment plants, for example, such applications don’t spook people with concerns about personal surveillance. Each example has a strong connectivity component, too. None of the results come from “one sensor reported this”—it’s all about connecting the dots.


How Hundred-Year-Old Enterprises Improve IT Ops using Data and AIOps

Sam Chatman, VP of IT Ops at OneMain Financial, explains the impact of leveraging AIOps: “Being able to understand what is released, when it’s released, and the potential impacts of that release. We are overcoming alert fatigue, and BigPanda will be our Watson of the Enterprise Monitoring Center (EMC) by automating alerts, opening incident tickets, and identifying those actions to improve our mean time to recovery. This helps us keep our systems up when our users and customers need them to be.” For other organizations, it might help to visualize what naturally happens to IT operations’ monitoring programs over time. Every time systems go down and IT gets thrown under the bus for a major incident, they add new monitoring systems and alerts to improve their response times. As new multicloud, database, and microservice technologies emerge, they add even more monitoring tools and observability capabilities. Having more operational data and alerts is a good first step, but then alert fatigue kicks in when tier-one support teams respond and must make sense of dozens to thousands of alerts.
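
The alert-fatigue problem described above is often attacked by correlating raw alerts into a much smaller number of incidents. The sketch below is a deliberately simplified illustration of that idea, not BigPanda's or any vendor's actual algorithm; the alert fields are hypothetical.

```python
# Hypothetical sketch: collapse a noisy alert stream into grouped incidents by
# correlating on a shared key such as (host, check).
from collections import defaultdict

def correlate(alerts):
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[(alert["host"], alert["check"])].append(alert)
    return incidents

alerts = [
    {"host": "db-01", "check": "cpu"},
    {"host": "db-01", "check": "cpu"},
    {"host": "web-02", "check": "latency"},
]
grouped = correlate(alerts)
print(f"{len(alerts)} raw alerts collapse into {len(grouped)} incidents")
```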


A perfect storm: Why graphics cards cost so much now

Demand for gaming hardware blew up during the pandemic, with everyone bored and stuck at home. In the early days of the lockdowns in the United States and China, Nintendo’s awesome Switch console became red-hot. Even replacement controllers and some games became hard to find. ... Beyond the AMD-specific TSMC logjam, the chip industry in general has been suffering from supply woes. Even automakers and Samsung have warned that they’re struggling to keep up with demand. We’ve heard whispers that the components used to manufacture chips—from the GDDR6 memory used in modern GPUs to the substrate material fundamentally used to construct chips—have been in short supply as well. Seemingly every industry is seeing vast demand for chips of all sorts right now. ... High demand and supply shortages are the perfect recipe for folks looking to flip graphics cards and make a quick buck. The second they hit the streets, the current generation of GPUs was set upon by “entrepreneurs” using bots to buy up stock faster than humans can, then selling their ill-gotten wares for a massive markup on sites like eBay, StockX, and Craigslist.


How to sharpen machine learning with smarter management of edge cases

Production is when AI models prove their value, and as AI use spreads, it becomes more important for businesses to be able to scale up model production to remain competitive. But as Shlomo notes, scaling production is exceedingly difficult, as this is when AI projects move from the theoretical to the practical and have to prove their value. “While algorithms are deterministic and expected to have known results, real world scenarios are not,” asserts Shlomo. “No matter how well we will define our algorithms and rules, once our AI system starts to work with the real world, a long tail of edge cases will start exposing the definition holes in the rules, holes that are translated to ambiguous interpretation of the data and leading to inconsistent modeling.” That’s much of the reason why more than 90% of C-suite executives at leading enterprises are investing in AI, but fewer than 15% have deployed AI for widespread production. Part of what makes scaling so difficult is the sheer number of factors for each model to consider. In this way, human-in-the-loop (HITL) review enables faster, more efficient scaling, because the ML model can begin with a small, specific task, then scale to more use cases and situations.
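
One common way to handle the long tail of edge cases is a human-in-the-loop gate: the model acts automatically only when it is confident and defers everything else to a person. The sketch below is a generic illustration of that pattern; the threshold, toy model and review queue are hypothetical and not taken from the article.

```python
# Hypothetical HITL gate: low-confidence predictions are routed to human
# reviewers instead of being acted on automatically.
CONFIDENCE_THRESHOLD = 0.85
human_review_queue = []

def predict_with_hitl(model, sample):
    label, confidence = model(sample)            # model returns (label, confidence)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                             # confident: handle automatically
    human_review_queue.append((sample, label, confidence))
    return None                                  # edge case: defer to a human

def toy_model(sample):
    # stand-in model: "long" inputs are confidently class "A"
    return ("A", 0.95) if len(sample) > 3 else ("B", 0.60)

for sample in ["abcd", "xy"]:
    print(sample, "->", predict_with_hitl(toy_model, sample))
print("queued for human review:", human_review_queue)
```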



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - April 18, 2021

How Can Financial Institutions Prepare for AI Risks?

In exploring the potential risks of AI, the paper provided “a standardized practical categorization” of risks related to data, AI and machine learning attacks, testing, trust, and compliance. Robust governance frameworks must focus on definitions, inventory, policies and standards, and controls, the authors noted. Those governance approaches must also address the potential for AI to present privacy issues and potentially discriminatory or unfair outcomes “if not implemented with appropriate care.” In designing their AI governance mechanisms, financial institutions must begin by identifying the settings where AI cannot replace humans. “Unlike humans, AI systems lack the judgment and context for many of the environments in which they are deployed,” the paper stated. “In most cases, it is not possible to train the AI system on all possible scenarios and data.” Hurdles such as the “lack of context, judgment, and overall learning limitations” would inform approaches to risk mitigation, the authors added. Poor data quality and the potential for machine learning/AI attacks are other risks financial institutions must factor in.


How to turn everyday stress into ‘optimal stress’

What triggers a stress response in one person may hardly register with another. Some people feel stressed and become aggressive, while others withdraw. Likewise, our methods of recovery are also unique—riding a bike, for instance, versus reading a book. Executives, however, aren’t usually aware of their stress-related patterns and idiosyncrasies and often don’t realize the extent of the stress burden they are already carrying. Leadership stereotypes don’t help with this. It’s no surprise that we can’t articulate how stress affects us when we equate success with pushing boundaries to excess, fighting through problems, and never admitting weakness. Many people we know can speak in detail about a favorite vacation but get tongue-tied when asked what interactions consistently trigger stress for them, or what time of day they feel most energized. To reach optimal stress, we need to be conscious of our stress; in neurological terms, it’s the first step toward lasting behavior change. As the psychiatrist and author Daniel Siegel writes, “Where attention goes, neural firing flows and neural connection grows.” And it is these newly grown neurological pathways that define our behavior and result in new habits.


How to Empower Transformation and Create ROI with Intelligent Automation

CIOs see ROI delivered in multiple ways. For example, a recent Forrester study identified that Bizagi’s platform offered 288% financial returns. CIOs seek benefits other than cost savings, such as increased net promoter scores, realized upsell opportunities, and improved end-user productivity. ... Only that automation sets a very high bar on what machines can perform reliably, especially when employees often interpret automation to mean “without any human involvement.” For example, you can automate many steps in a loan application and its approval processes when the applicant checks all the right boxes. However, most financial transactions have complex exceptions and actions that require orchestration across multiple systems. Managers and employees know the daily complications, and oversimplifying their jobs with only rudimentary automations often leads to a backlash from vocal detractors. That’s why CIOs and IT leaders need more than simple task automation, departmental applications, or one-off data analysis. Digital leaders recognize the importance of intelligence and orchestration to modernize workflows, meet customer expectations, leverage machine learning capabilities, and enable implementation of the required business rules.


Understand Bayes’ Theorem Through Visualization

Before going to any definition: normally, Bayes’ Theorem is used when we have a hypothesis, we have observed some evidence, and we would like to know the probability that the hypothesis holds given that the said evidence is true. If that sounds a bit confusing, let’s use the above visualization for a better explanation. In the example, we want to know the probability of selecting a female engineer given that the person selected has finished a Ph.D. The first thing we need is the probability of selecting a female engineer from the population without considering any evidence. The term P(H) is called the “prior”. ... As we know, Bayes’ theorem branches from Bayesian statistics, which relies on subjective probabilities and uses Bayes’ theorem to update the knowledge and beliefs regarding the events and quantities of interest based on data. Hence, based on some knowledge, we can draw some initial inferences on the system (the “prior” in Bayes) and then “update” these inferences based on new data to obtain the “posterior”. Moreover, there are terms like Bayesian inference and frequentist statistical inference, which are not covered in this article. 
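
For readers who want the arithmetic behind the example, here is a small sketch of the update P(H|E) = P(E|H) * P(H) / P(E); the numbers are invented for illustration and do not come from the article's visualization.

```python
# H = "the selected person is a female engineer", E = "the person holds a Ph.D."
# All probabilities below are hypothetical placeholders.
p_h = 0.05            # prior P(H): share of female engineers in the population
p_e_given_h = 0.30    # likelihood P(E|H): share of female engineers with a Ph.D.
p_e = 0.10            # evidence P(E): share of the whole population with a Ph.D.

p_h_given_e = p_e_given_h * p_h / p_e   # posterior P(H|E)
print(f"P(H|E) = {p_h_given_e:.2f}")    # 0.15 with these made-up numbers
```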


Leveraging Geolocation Data for Machine Learning: Essential Techniques

Fortunately, we don’t have to worry about parsing these different formats and manipulating low-level data structures. We can use the wonderful GeoPandas library in Python that makes all this very easy for us. It is built on top of Pandas, so all of the powerful features of Pandas are already available to you. It works with GeoDataFrames and GeoSeries which are “spatially-aware” versions of Pandas DataFrames and Series objects. It provides a number of additional methods and attributes that can be used to operate on geodata within a DataFrame. A GeoDataFrame is nothing but a regular Pandas DataFrame with an extra ‘geometry’ column for every row that captures the location data. GeoPandas can also conveniently load geospatial data from all of these different geo file formats into a GeoDataFrame with a single command. We can perform operations on this GeoDataFrame in the same way regardless of the source format. This abstracts away all of the differences between these formats and their data structures.
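
A minimal sketch of the workflow described above (the column names and the file name are hypothetical):

```python
import pandas as pd
import geopandas as gpd

# Build a "spatially-aware" GeoDataFrame from ordinary latitude/longitude columns.
df = pd.DataFrame({
    "name": ["store_a", "store_b"],
    "lon": [-73.99, -74.01],
    "lat": [40.73, 40.71],
})
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.lon, df.lat), crs="EPSG:4326")
print(gdf.geometry)

# The same single command loads shapefiles, GeoJSON and other geo formats:
# gdf2 = gpd.read_file("neighborhoods.geojson")   # hypothetical file
```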


Why Probability Theory is Hard

First, probability theorists don’t even agree what probability is or how to think about it. While there is broad consensus about certain classes of problems involving coins, dice, coloured balls in perfectly mixed bags and lottery tickets, as soon as we move into practical probability problems with more vaguely defined spaces of outcome, we are served with an ontological omelette of frequentism, Bayesianism, Kolmogorov axioms, Cox’s theory, subjective, objective, outcome spaces and propositional credences. Even if the probationary probability theorist is eventually indoctrinated (by choice or by accident of course instructor) into one or other school, none of these frameworks is conceptually easy to access. Small wonder that so much probabilistic pedagogy is boiled down to methodological rote learning and rules of thumb. There’s more. Probability theory is often not taught very well. The notation can be confusing; and don’t get me started on measure theory. The good news is that in terms of practical applications, very little can get you a very long way. 


Open-source, cloud-native projects: 5 key questions to assess risk

Another important indicator of risk relates to who owns or controls an open-source project. From a risk perspective, projects with neutral governance, where decisions are made by people from a variety of different companies, present a lower risk. The lowest-risk projects are ones that fall under vendor-neutral foundations. Kubernetes has been successful in part because it is shepherded by the Cloud Native Computing Foundation (CNCF). Putting Kubernetes into a neutral foundation provided a level playing field where people from different companies could work together as equals, to create something that benefits the entire ecosystem. The CNCF focuses on helping cloud-native projects set themselves up to be successful with resource documents, maintainer sessions, and help with various administrative tasks. In contrast, open-source projects controlled by a single company have higher risk because they operate at the whims of that company. Outside contributors have little recourse if that company decides to go in a direction that doesn't align with the expectations of the community's other participants. This can manifest as licensing changes, forks, or other governance issues within a project.


Interpreted vs. compiled languages: What's the difference?

In contrast to compiled languages, interpreted languages generate an intermediary instruction set that is not recognizable as source code. Nor is the intermediary architecture-specific the way machine code is. The Java language calls this intermediary form bytecode. This intermediary deployment artifact is platform agnostic, which means it can run anywhere. But one caveat is that each runtime environment needs to have a preinstalled interpreter. The interpreter converts the intermediary code into machine code at runtime. The Java virtual machine (JVM) is the required interpreter that must be installed in any target environment in order for applications packaged and deployed as bytecode to run. The benefit of applications built with an interpreted language is that they can run in any environment. In fact, one of the mantras of the Java language when it was first released was "write once, run anywhere," as Java apps were not tied to any one OS or architecture. The drawback to an interpreted language is that the interpretation step consumes additional clock cycles, especially in comparison to applications packaged and deployed as machine code.
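
The article's example is Java, but the same idea is easy to see in Python, which also compiles source into a platform-agnostic bytecode that its virtual machine interprets at runtime. This is an analogous illustration, not the Java toolchain itself:

```python
import dis

def add(a, b):
    return a + b

# Prints intermediary bytecode instructions (e.g. LOAD_FAST, BINARY_ADD on
# CPython 3.9), not architecture-specific machine code.
dis.dis(add)
```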


Disrupting the disruptors: Business building for banks

The strategic target of a new build should be nothing less than radical disruption. Banks should aim not only to expand their own core offerings but also to create a unique combination of products and functionality that will disrupt the market. Successful new launches come with a clear sense of mission and direction, as well as a road map to profitability (see sidebar “Successful business builders are realistic about the journey”). One regional digital attacker in Asia targeted merchant acquiring and developed a network with more than 700,000 merchants. In just four months, it created a product with the capacity to process payments through QR codes at the point-of-sale systems of the two main merchant acquirers in the region and to transfer money between personal accounts. In another case, an incumbent bank launched a state-of-the-art digital solution in just ten months. In China, a leading global bank launched a digital-hybrid business that focuses on financial planning and uses social media to connect with customers. A midsize Asian bank, meanwhile, launched an ecosystem of services for the digital-savvy mass and mass-affluent segment, aimed at making it easier for customers to manage their financial lives.


9 Trends That Are Influencing the Adoption of Devops and Devsecops

Despite the challenges of adopting these approaches, the potential gains to be made are generally seen as justifying this risk. For most development teams, this will first mean moving to a DevOps process, and then later evolving DevOps into DevSecOps. Beyond the operational gains that can be made during this transition lie a number of other advantages. One of the often overlooked effects of just how widespread DevOps has become is that, for many developers, it has become the default way of working. According to open source contributor and DevOps expert Barbara Ericson of Cloud Defense, “DevOps has suddenly become so ubiquitous in software engineering circles that you’ll be forgiven if you failed to realize the term didn’t exist until 2009...DevOps extends beyond the tools and best practices needed to accomplish its implementation. The successful introduction of DevOps demands a change in culture and mindset.” This trend is only likely to continue in the future, and could make it difficult for firms to hire talented developers if they are lagging behind on their own transition to DevOps.



Quote for the day:

"Leadership is about being a servant first." -- Allen West

Daily Tech Digest - April 17, 2021

Decoupling Frontends and Backends with GraphQL

GraphQL combines the best of APIs and query languages. It is an API because a simple POST returns the data requested. And it is a query language because the user can ask for what she wants (as long as it is permissible in the definition of the GraphQL API endpoint). GraphQL has three distinct concepts: Types (such as Customer, Order, etc.) that the user (frontend developer) interacts with. These types are linked together in a graph — for example, a customer might have orders — hence the name GraphQL. It has an additional abstraction, an interface, that can be used to further hide types. This is particularly useful when there are multiple different implementations; Queries, such as customerById (queries are just entry points into the graph), which return data of a type; and Resolvers, which describe the implementation of the queries and the generation of the bits of data associated with types. For example, there might be a resolver that says the query customerById can be executed by issuing a SQL statement against a MySQL database, whereas the query orderByCustomer requires a GET against a REST endpoint.
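
A rough sketch of those three concepts in plain Python (deliberately not tied to any particular GraphQL library; the type fields, resolvers and data sources are illustrative):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:              # Type
    id: str
    total: float

@dataclass
class Customer:           # Type, linked to Order in the graph
    id: str
    name: str
    orders: List[Order] = field(default_factory=list)

def resolve_customer_by_id(customer_id: str) -> Customer:
    # Resolver: might issue a SQL statement against a MySQL database
    return Customer(id=customer_id, name="Ada")

def resolve_orders_by_customer(customer: Customer) -> List[Order]:
    # Resolver: might instead perform a GET against a REST endpoint
    return [Order(id="o-1", total=42.0)]

def customer_by_id(customer_id: str) -> Customer:
    # Query: just an entry point into the graph
    customer = resolve_customer_by_id(customer_id)
    customer.orders = resolve_orders_by_customer(customer)
    return customer

print(customer_by_id("c-7"))
```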


IoT in Mining

Mining companies have overcome the challenge of connectivity by implementing more reliable connectivity methods and data-processing strategies to collect, transfer and present mission-critical data for analysis. Satellite communications can play a critical role in transferring data back to control centers to provide a complete picture of mission-critical metrics. Mining companies worked with trusted IoT satellite connectivity specialists such as Inmarsat and their partner ecosystems to ensure they extracted and analyzed their data effectively. Cybersecurity will be another major challenge for IoT-powered mines over the coming years. As mining operations become more connected, they will also become more vulnerable to hacking, which will require additional investment in security systems. Following a data breach at Goldcorp in 2016 that disproved the previous industry mentality that miners are not typically targets, 10 mining companies established the Mining and Metals Information Sharing and Analysis Centre (MM-ISAC) to share cyber threats among peers in April 2017.


BazarLoader Malware Abuses Slack, BaseCamp Clouds

According to researchers at Sophos, in the first campaign spotted, adversaries are targeting employees of large organizations with emails that purport to offer important information related to contracts, customer service, invoices or payroll. “One spam sample even attempted to disguise itself as a notification that the employee had been laid off from their job,” according to Sophos. The links inside the emails are hosted on Slack or BaseCamp cloud storage, meaning that they could appear to be legitimate if a target works at an organization that uses one of those platforms. In an era of remote working, the odds are good that this is the case. “The attackers prominently displayed the URL pointing to one of these well-known legitimate websites in the body of the document, lending it a veneer of credibility,” researchers said. “The URL might then be further obfuscated through the use of a URL shortening service, to make it less obvious the link points to a file with an .EXE extension.” If a target clicks on the link, BazarLoader downloads and executes on the victim’s machine. The links typically point directly to a digitally signed executable with an Adobe PDF graphic as its icon.


How the Biden Administration Can Make Digital Identity a Reality

Digital identity has already gained bipartisan support on Capitol Hill. In 2020, Representatives Bill Foster (D-IL) and John Katko (R-NY) introduced the Improving Digital Identity Act, designed to establish a nationwide approach to improving digital identity. Now, the Biden administration plans to leverage digital identity for modernization of public services, ranging from government assistance to healthcare to licensing. The act would be a step forward but wouldn't completely address needs in the public and private sectors. Rep. Foster notes that the bill would primarily address the government's need for digital identity, paying less attention to issues (e.g., transaction friction, fraud) facing enterprises and consumers. Given that, the Biden administration must take a broader, holistic approach to digital identity, eliminating data siloing that would make future digital IDs unnecessarily purpose-specific. Any error would allow bad actors to access sensitive data and impersonate customers, resulting in fraudulent requests for government services, credit cards, loans, or licenses.


Manufacturing Performance Intelligence: How digital unlocks resilient, agile operations

Digital solutions have a huge role to play in enabling Industry 4.0 and driving sustainable practices. As manufacturers rapidly accelerated their adoption of digital operating models, they have been able to safeguard employee health, ensure commercial resilience and elevate performance using digital intelligence. This is the new opportunity for industries, and AVEVA’s portfolio combines the operational data management of PI System with industrial analytics, enabling us to lead the way. By harnessing the power of information with artificial intelligence and human insight, AVEVA is leading the industry with Performance Intelligence. Schneider Electric’s network of Smart Factories was among the world’s first to transform operations, pioneering AVEVA’s Discrete Lean Management software and pivoting to cloud-based operating models to safeguard production. These changes transformed how we operate, cutting downtime by 44% and driving 21% increases in energy efficiency in key factories. The World Economic Forum recognized three Smart Factories as Advanced Manufacturing Lighthouses as a result.


Designing & Managing for Resilience

The concept of shared capacity and reciprocity within an organization is more complex than simply directing teams to work together. Many organizations do have cross-functional work teams or attempt to break down organizational silos by rotating executives throughout the business. However, organizations are defined by reporting structures, functional units or product teams, each of which has its own goals and objectives. In addition, an engineering leader is tasked with setting direction, vision and priorities for their teams for a given quarter or phase of the business lifecycle, which may put them at a different tempo than their counterparts. Systemic and difficult problems that span organizational boundaries can be emergent or continuously changing as different teams attempt to mitigate the problems within their own scope of authority. This can make it difficult to coordinate clear goals and objectives with peers for inter-organizational initiatives. Therefore, a function of the resilient leader is to advocate for capacity sharing and reciprocity as part of their team’s goals and priorities. 


Cyber security for telehealth services

The goal of cybersecurity is to reduce the risk of cyber-attacks and to protect organizations and individuals from the intentional and deliberate exploitation of security vulnerabilities in systems, networks, and technologies. Say you have finished a teleconsultation on Practo and are about to check out: you are offered payment options using your debit or credit card or UPI, and like you, there are millions of users sharing such sensitive information on the platform. Have you ever wondered how secure the information on Practo is? From updated privacy policies to security-focused patents to the use of AI for data security, each company is increasing its focus on data protection to promote user trust. With the increasing growth of the digital world, cybersecurity threats will continue to intensify as hackers learn to adapt to security strategies. This will increase the overall need for cybersecurity, with companies paying more and more highly qualified security professionals to protect their vulnerable assets from cyber-attacks. Telehealth means you no longer have to travel; your appointment with the physician takes place through a TV screen between you.


Beyond the Quickstart: Running Apache Kafka as a Service on Kubernetes

Kubernetes provides many networking options such as node ports, ingress, load balancers and, with Red Hat OpenShift, routes as well. Kafka requires the producers and consumers to talk to individual brokers based on the placement of partitions and partition leaders. Based on the different networking options, you have to configure your network correctly so that the producers and consumers are able to individually address the brokers. Kafka exposes the “advertised.listeners” option in the broker configuration, which allows the clients to directly connect to the brokers. When configuring the Kubernetes services to allow access to the brokers, you will also configure the “advertised.listeners” in the broker to ensure that producers and consumers are able to connect to the individual brokers. Kubernetes abstracts infrastructure, following an interface pattern wherein third-party providers can create their own plugins that follow a standard interface definition. So you could also build your own routing layer to make sure you are able to address the brokers. Kubernetes allows you to do this via ingress resources.
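
As a rough illustration of the broker-side settings involved (listener names, hostnames and ports below are placeholders, and this is not a complete broker configuration), the external advertised address would be set to whatever node port, load balancer or route Kubernetes exposes for that particular broker:

```properties
# Hypothetical per-broker listener settings
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://kafka-0.kafka-headless.default.svc:9092,EXTERNAL://broker-0.example.com:30094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```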


Using The Internet Of Things For Smart Office Automation

Scheduling is critical in a post-COVID office. IoT technology makes it much easier to keep staff at an optimum number of people throughout the day to ensure compliance with safety practices. Companies can create a check-in process and monitor any potential warning signs. This system enables companies to keep track of who was in the same room and parked their cars using smart parking solutions. Smart scheduling can cut down overtime and stagger start and leave times so that people can have a more flexible schedule while keeping the number of people in the same areas at a minimum. Smart scheduling can automatically create a master plan that considers all staff members’ preferences and meets the company’s overall requirements. Smart scheduling for IoT-enabled devices and networks is useful in a post-COVID office environment. Companies can automatically create schedules for IoT items needed to match employee schedules. This is convenient if employees call in sick because their workspaces can adjust automatically if they are not at work. Making real-time changes to IoT schedules is one of the best uses of smart office technology.


Bank Groups Object to Proposed Breach Notification Regulation

The four banking groups contend that compliance with the new regulation would prove too burdensome for financial institutions. "We share the goal to develop a flexible incident notification framework offering early awareness of disruptions, while also being appropriately scoped to avoid over-reporting and unnecessary burden for the banking industry, third-party service providers and the supervisory community," the groups wrote. The proposed regulation bases its definition of a reportable computer security incident on the National Institute of Standards and Technology's definition. The NIST definition is: "An occurrence that results in actual or potential jeopardy to the confidentiality, integrity or availability of an information system or the information the system processes, stores or transmits or that constitutes a violation or imminent threat of violation of security policies, security procedures or acceptable use policies." The four financial groups wrote that the NIST definition is too broad, and if it's included in a breach notification requirement, it would result in insignificant occurrences becoming reportable incidents.



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." --  Katzenbach & Smith

Daily Tech Digest - April 15, 2021

Cyber criminals are targeting the cloud — here’s how to defend against them

While threat actors may use methods to actively infiltrate a company’s defences, sometimes the vulnerabilities are already there. CSPs are usually quick to patch known vulnerabilities without requiring customer interaction. However, when cloud services involve the customer in managing the software, oversight can be tricky due to the complexity of the environment. Businesses should prioritise regular scanning and patching of known vulnerabilities with the latest version of each type of software they’re running on their system. On top of this, IT leaders should maintain an up-to-date inventory of assets to ensure visibility of all endpoints that require patching. In the rush to gain a competitive advantage, cloud environments are evolving rapidly with organisations using a hybrid or multi-cloud approach. This complexity can lead to misconfiguration if set up incorrectly. Misconfigured cloud infrastructures can expose data or resources to the public internet, and failure to implement encryption or multi-factor authentication can allow actors to access cloud-related tools, data, assets, or systems.


6 Key Forces Shaping Technology and Service Providers Through 2025

COVID-19 shut down businesses, supply chains and entire countries, and changed the way companies buy, sell and work. But other events like trade wars, legislation, and regulations can all impact technology providers. It’s not about predicting these events as much as identifying existing trends (i.e., work from home, e-commerce) that will accelerate if these occur. Customers demand products and services that meet their particular business or IT needs. In a broader sense, customer demand and expectations are shaped by cultural changes and world events (e.g., stay-at-home orders increasing demand for distributed work tools). User experiences and trends like mobility or subscription and freemium pricing, which were made popular in consumer markets, have led IT customers to want the same benefits from technology providers. Emerging technologies may seem like novelties when they first appear, but when these technologies become a trend, they can profoundly shape buying and selling behavior and enable new business models. Over the next several years, today’s immature technologies and “weak signals” have the potential to disrupt what your product does, who it serves and how you deliver it.


Fast Data - It’s Not Your Grandfather’s Operational Data

Fast data enables full-circle delivery of data that is “in motion.” In other words, it’s generated and consumed instantly by interactive applications running on large numbers of devices. Fast data enables organizations to act on insights gained from user interactions as these insights are generated at the point of the interaction. And because decisions or actions take place right at the front-end, fast data architectures are, by definition, distributed and real-time. Big data is focused on capturing data, storing it, and processing it periodically in batches. A fast data architecture, on the other hand, processes events in real time. Big data focuses on volume, while with fast data, the emphasis is on velocity. Here’s an example. A credit card company might want to create credit risk models based on demographic data. That’s a big data challenge. A fast data architecture would be required if that credit card company wants to send fraud alerts to customers in real-time, when a suspicious activity occurs in their accounts. Think of FedEx. To track millions of packages and ensure on-time and accurate delivery across the planet, FedEx needs access to the right real-time data to perform real-time analysis and deliver the right interaction—right away, right there, not a day later.
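
As a generic sketch of the distinction (not FedEx's or any vendor's actual pipeline; the stream, rule and threshold are invented), a fast-data consumer acts on each event at the moment it arrives rather than waiting for a periodic batch job:

```python
def transaction_stream():
    """Stand-in for a live stream of card transactions."""
    yield {"card": "1234", "amount": 25.0}
    yield {"card": "1234", "amount": 9500.0}   # suspicious
    yield {"card": "5678", "amount": 12.5}

def looks_suspicious(txn):
    return txn["amount"] > 5000                # toy stand-in for a risk model

for txn in transaction_stream():               # process events as they arrive
    if looks_suspicious(txn):
        print(f"real-time alert for card {txn['card']}: {txn['amount']}")
```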


Better Software Writing Skills in Data Science: Dead Programs Tell No Lies

When you figure out that something “bad” has happened in your program, something you know should not happen and that there is no way around, one way to throw the exception higher up is via an assert. This will throw an exception to the higher-up program, which will have to decide what to do with it. An example would be: you expect an int as input and you get a string. This is contract-breaking; the higher-up program asked you to handle improper data, so why would your program decide why the contract was broken? It should be the responsibility of the caller to handle that exception properly. That might warrant an assert right there. Depending on the organization you work in, when and how to use exceptions and asserts might get philosophical. On the other hand, it could also be subject to very specific rules. There might be really valid reasons why an organization might prefer one approach over another. Learn the rules, and if there is no rule, have a discussion around it and apply your best judgement. In any case, dead programs tell no lies. Better to kill the program than to have to deal with polluted data a year in the future.
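
A minimal sketch of that "die loudly" idea, with an invented function and contract:

```python
def mean_age(ages):
    # Contract: every element must be an int; a string here is the caller's bug.
    assert all(isinstance(a, int) for a in ages), f"expected ints, got {ages!r}"
    return sum(ages) / len(ages)

print(mean_age([31, 42, 27]))      # fine
print(mean_age([31, "42", 27]))    # AssertionError bubbles up to the caller,
                                   # instead of silently polluting the data
```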


How to avoid social engineering scams

Social engineering is a collective term for ways in which fraudsters manipulate people into performing certain actions. It’s generally used in an information security context to refer to the tactics crooks use to trick people into handing over sensitive information or exposing their devices to malware. This often comes in the form of phishing scams – messages supposedly from a legitimate sender that ask the recipient to download an attachment or follow a link that directs them to a bogus website. However, social engineering isn’t always malicious. For example, say you need someone to do you a favour, but you’re unsure that they’ll agree if you ask them apropos of nothing. You might grease the wheels by offering to do something for them first, making them feel obliged to say yes when you ask them to return the favour. That’s a form of social engineering. You’re performing an action that will compel the person to do something that will benefit you. Understanding social engineering in this context helps you see that social engineering isn’t simply an IT problem. It’s a vulnerability in the way we make decisions and perceive others – something we delve into more in the next section.


Mesh networking vs. traditional Wi-Fi routers: What is best for your home office?

Before changing your setup, you should also consider your ISP package. If you're subscribed to a low-speed offering, new equipment is not going to necessarily help. Instead, package upgrades could be a better option. If you are a sole user and need a stable, powerful connection -- such as for resource-hungry work applications or gaming -- a traditional router may be all you need. Wired should be quicker than wireless, and so investment in a simple Ethernet cable, easily picked up for $10 to $15, could be enough. Wi-Fi range extenders, too, could be considered as an alternative to mesh if you just need to boost coverage in some areas, and will likely be less expensive than purchasing individual mesh nodes. Some vendors also offer mesh 'bolt-ons' such as Asus' AiMesh, which can connect up existing routers to create a mesh-like coverage network without ripping everything out and starting again. However, mesh networking is here to stay and at a time when many of us are now in the home rather than traditional home offices, a mesh setup could be a future-proof investment. 


Intel Report Spotlights Importance Of Transparency In Cybersecurity

Transparency and security assurance are important, but the Intel report also reveals other factors that businesses consider for endpoint and network infrastructure purchasing decisions. Interoperability with existing tools and platforms ranked highest with 63%, followed by installation cost (58%), system complexity (57%), vendor support (55%), and scalability issues (53%). One area that is particularly interesting is the intersection of hardware and software and how they can work together to solve cybersecurity problems in innovative ways. More than three-fourths of the survey participants indicated that it is highly important for technology providers to offer hardware assisted capabilities to mitigate software exploits. More than 70% also noted that it is important for technology providers to offer mechanisms and security controls to protect distributed workloads. Suzy Greenberg, Vice President of Intel Product Assurance and Security for Intel, joined me recently on the TechSpective Podcast to talk about this report and some of the insights and trends she finds interesting.


Security Bug Allows Attackers to Brick Kubernetes Clusters

The impact could be fairly wide: “As of Kubernetes v1.20, Docker is deprecated and the only container engines supported are CRI-O and Containerd,” Sasson explained. “This leads to a situation in which many clusters use CRI-O and are vulnerable. In an attack scenario, an adversary may pull a malicious image to multiple different nodes, crashing all of them and breaking the cluster without leaving a way to fix the issue other than restarting the nodes.” When a container engine pulls an image from a registry, it first downloads its manifest, which has the instructions on how to build the image. Part of that is a list of layers that compose the container file system, which the container engine reads and then downloads and decompresses each layer. “An adversary could upload to the registry a malicious layer that aims to exploit the vulnerability and then upload an image that uses numerous layers, including the malicious layer, and by that create a malicious image,” Sasson explained. “Then, when the victim pulls the image from the registry, it will download the malicious layer in that process and the vulnerability will be exploited.” Once the container engine starts downloading the malicious layer, the end result is a deadlock.


What is the market opportunity for NFTs?

NFTs have seen massive increases in trade volume and users in recent times. Investments in NFTs rose by 299% across 2020, and the NFT market’s sales volume grew by 2,882% in February alone. This increased interest resulted in part from the improving infrastructure surrounding NFTs, which has supported full-stack services from trading venues, minting platforms, marketplaces and more. While detractors have suggested that the current crest of interest represents a bubble, experts have pointed to the fact that the technology behind NFTs is strong enough to survive a possible crash and is expected to be around for quite some time. According to Beeple, a digital artist who recently made almost $70 million from his NFT sale, the technology will support any work or piece of real value. Similarly, the new owners of Beeple’s record-setting piece of artwork, Vignesh (Metakovan) and Anand (Twobadour), believe their transaction represents a paradigm shift in how the world perceives art. They see NFTs as having an equalizing effect between the traditionally dominant West and the global South.


Applications, Challenges For Using AI In Fabs

One of the main applications for machine learning is defect detection and classification. The first step is using machine learning to detect actual defects and ignore noise. We are seeing many examples where machine learning is much better at extracting the actual killer defect signal from a noisy background of process and pattern variations. The second step is to leverage machine learning to classify defects. The challenge these days is that when optical inspectors run at high sensitivity to capture the most subtle, critical defects on the most critical layers, other anomalies are also detected. Machine learning is first applied to the inspection results to optimize the defect sample plan sent for review. Then, high-resolution SEM images are taken of those sites and additional machine learning is used to analyze and classify the defects to provide fab engineers with accurate information about the defect population – actionable data to drive process decisions. An emerging application is to make use of machine learning to be more predictive about where to inspect and measure.



Quote for the day:

"Real leaders are ordinary people with extraordinary determinations." -- John Seaman Garns

Daily Tech Digest - April 14, 2021

How far have we come? The evolution of securing identities

With internal, enterprise-facing identity, these individuals work for your organization and are probably on the payroll. You can make them do things that you can’t ask customers to do. Universal 2nd Factor (U2F) is a great example. You can ship U2F to everyone in your organization because you’re paying them. Plus, you can train internal staff and bring them up to speed with how to use these technologies. We have a lot more control in the internal organization. Consumers are much harder. They are more likely to just jump ship if they don’t like something. Adoption rates of technologies, like multifactor authentication, are extremely low in consumer land because people don’t know what it is or the value proposition. We also see organizations reticent to push it. A few years ago, a client had a 1 percent adoption rate of two-factor authentication. I asked, “Why don’t you push it harder?” They said that every time they have more people using two-factor authentication, there are more people who get a new phone and don’t migrate their soft token or save the recovery codes. Then, they call them up and say, “I have my username or password but not my one-time password.


Informatica debuts its intelligent data management cloud

As data becomes more valuable, so does data management, Informatica argues. Its IDMC offers more than 200 intelligent cloud services, powered by Informatica's AI engine CLAIRE. It applies AI to metadata to give an organization an understanding of its "data estate," Ghai explained. The "data estate" tells you about the fragmentation of data -- its location and the various domains of data. "And through that insight," Ghai said, Informatica will "automate the ability to connect to data, to build data pipelines, process data, provision it for analytics... to apply advanced transformations to cleanse that data and trust it... to match, merge and build a single source of truth." From there, the platform aims to make data more accessible to business users with features like the "data marketplace." With the marketplace, users can "shop for data" much as one would shop for consumer goods on the Amazon marketplace, Ghai explained. The IDMC is micro-services based and API-driven, with elastic and serverless processing. It's built for hybrid and multi-cloud environments. The platform is already running at scale, processing more than 17 trillion transactions each month.


Microservices in the Cloud-Native Era

With developer tools and platforms like Docker, Kubernetes, AWS, GitHub, etc., software development has become very approachable and easy. Suppose you have a monolithic architecture and three million lines of code. Making changes to the code base whenever required and releasing new features was not an easy task before. It created a lot of friction between developer teams. Finding the mistake that was causing the code to break was a monumental task. That’s where microservices architecture shines. Many companies have recently moved from their humongous monolithic architectures to microservices architecture for a brighter future. There are many advantages to shifting to a microservices architecture. While a monolithic application puts all of its functionality into a single code base and scales by replicating on multiple servers, a microservices architecture breaks an application down into several smaller services. It then segments them by logical domains. Together, microservices communicate with one another over APIs to form what appears as a single application to end-users. The problem with a monolithic application is that when something goes wrong, the operations team blames development, and development blames QA.
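
A deliberately tiny sketch of that idea: one service owns one logical domain and exposes it over an API that other services call, instead of everything living in a single code base. The service name, route, port and the use of Flask are just convenient illustrative choices here.

```python
from flask import Flask, jsonify

orders_service = Flask("orders")
ORDERS = {"c-7": [{"id": "o-1", "total": 42.0}]}   # data owned by this service

@orders_service.route("/customers/<customer_id>/orders")
def list_orders(customer_id):
    return jsonify(ORDERS.get(customer_id, []))

if __name__ == "__main__":
    # A separate "customers" service would call this over HTTP, e.g.
    # requests.get("http://orders:5001/customers/c-7/orders"),
    # rather than importing this module directly.
    orders_service.run(port=5001)
```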


Modern Data Warehouse & Reverse ETL

“Reverse ETL” is the process of moving data from a modern data warehouse into third-party systems to make the data operational. Traditionally, data stored in a data warehouse is used for analytical workloads and business intelligence (i.e. identifying long-term trends and influencing long-term strategy), but some companies are now recognizing that this data can be further utilized for operational analytics. Operational analytics helps with day-to-day decisions, with the goal of improving the efficiency and effectiveness of an organization’s operations. In simpler terms, it’s putting a company’s data to work so everyone can make better and smarter decisions about the business. As examples, if your MDW ingested customer data which was then cleaned and mastered, that customer data can then be copied into multiple SaaS systems such as Salesforce to make sure there is a consistent view of the customer across all systems. Customer info can also be copied to a customer support system to provide better support to that customer by having more info about that person, or copied to a sales system to give the customer a better sales experience.
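
A rough sketch of what a reverse ETL step can look like (the table, columns, endpoint and the SQLite stand-in for the warehouse are all hypothetical): read mastered customer records out of the warehouse and push them to a SaaS API.

```python
import json
import sqlite3                     # placeholder for a real warehouse connection
import urllib.request

def read_customers(conn):
    cur = conn.execute("SELECT id, email, lifetime_value FROM dim_customer")
    return [dict(zip(("id", "email", "lifetime_value"), row)) for row in cur]

def push_to_saas(customer, endpoint="https://example.invalid/api/customers"):
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(customer).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)     # real code would add auth and retries

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE dim_customer (id TEXT, email TEXT, lifetime_value REAL)")
warehouse.execute("INSERT INTO dim_customer VALUES ('c-7', 'ada@example.com', 1200.0)")

for customer in read_customers(warehouse):
    print("would sync:", customer)         # call push_to_saas(customer) in real use
```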


The Microsoft-Nuance Deal: A new push for voice technology?

Microsoft has had its hand in voice technology since debuting its virtual assistant Cortana in 2015 as part the initial Windows 10 release. Since then, Cortana has evolved to support Android and iOS devices, Xbox, the Edge browser, Windows Mixed Reality headsets, and third-party devices such as thermostats and smart speakers. According to Microsoft, Cortana is currently used by more than 150 million people. More recently, the company shifted Cortana to position it as more of an office assistant rather than for more general use. “Voice recognition is gaining momentum and will be used in every type of industry — from transcription to command-and-control types of applications — and acquiring a leading vendor in this area just makes sense,” Pleasant said. She stressed that as users become familiar with Cortana, Siri and Amazon's Alexa at home, they expect to see similar speech-enabled technologies at work. She also noted that Microsoft is one of the few companies with the resources to acquire a company like Nuance, allowing it to jump ahead of rivals who might have wanted to do the same thing.


Get your firm to say goodbye to password headaches

In a passwordless environment, no password storage or management is needed. Therefore, IT teams are no longer burdened by setting password policies, detecting leaks, resetting forgotten passwords and having to comply with password storage regulation. It’s fair to say that for many helpdesk teams, password reset requests are the most common user request. Past research has determined that for some larger organizations, up to $1 million per year can be spent on staffing and infrastructure to handle password resets alone. Resetting passwords is probably not a particularly complex issue for most IT departments to deal with, but it’s the sheer number of requests that makes handling them an extremely time-consuming task. Just how much time does that take away from helpdesks on a daily, weekly or monthly basis? It’s one of those hidden costs that your firm will be incurring that can be streamlined by giving people passwordless connections into their environment. Passwords remain a weakness for those trying to secure customer and corporate data, and passwords are the number one target of cyber criminals.


Modernising the insurance industry with a shared IT platform model

Pockets of the insurance industry are heading this way, by, for example, using vehicle trackers that reward good driving with lower premiums. But behind the scenes for many organisations is a mass of hugely complex products and equally unwieldy legacy systems that don’t provide them with the ability to work in a way that is agile and digital-first. Assess your systems as they stand today and you may find that several, or possibly hundreds, have been redundant for some time. Eliminating these systems, which are nothing more than drains on the business’ resources, will allow for a greater level of agility. By moving away from cumbersome legacy systems that are no longer fit for purpose, insurers can create a simplified system that unifies silos, making everyday work more efficient, and saves the business money. Money that they can reinvest into creating a customer-centric company that can rival its strongest competitors. Imagine a world where you could simplify your product range, providing cover for the highest number of people with the fewest number of insurance products.


5 Great Ways To Achieve Complete Automation With AI and ML

The self-healing technique in test automation solves a major test script maintenance problem: automation scripts break whenever an object property changes, including its name, ID, CSS, and so on. This is where the dynamic location strategy comes into the picture. Here, programs automatically detect these changes and fix them dynamically without human intervention. This changes the overall approach to test automation to a great extent, as it allows teams to apply the shift-left approach in agile testing, making the process more efficient with increased productivity and faster delivery. ... This self-healing technique saves developers a lot of the time they would otherwise invest in identifying changes and updating them in the UI. Below is the end-to-end process flow of the self-healing technique as handled by artificial intelligence-based test platforms. According to this flow, the moment the AI engine determines that a test may break because an object property has changed, it extracts the entire DOM and studies its properties. Using the dynamic location strategy, it then runs the test cases without anyone needing to know that any such changes were made.
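The AI-driven platforms described here are proprietary, but the core idea behind a dynamic location strategy can be approximated with a simple fallback locator. Below is a minimal Python/Selenium sketch; the locators and element names are illustrative assumptions, not part of any particular tool.

    from selenium.common.exceptions import NoSuchElementException
    from selenium.webdriver.common.by import By

    def find_with_fallback(driver, locators):
        """Try several locators in order and return the first element that still resolves.

        A self-healing platform goes further: when a fallback succeeds, it studies the
        DOM and updates the primary locator automatically, without human intervention.
        """
        for by, value in locators:
            try:
                return driver.find_element(by, value)
            except NoSuchElementException:
                continue  # This property changed (ID, CSS, etc.); try the next strategy
        raise NoSuchElementException(f"No locator matched: {locators}")

    # Illustrative usage: look up the submit button by ID first, then CSS, then visible text.
    # submit = find_with_fallback(driver, [
    #     (By.ID, "submit-btn"),
    #     (By.CSS_SELECTOR, "button[type='submit']"),
    #     (By.XPATH, "//button[normalize-space()='Submit']"),
    # ])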


Apache Software Foundation retires slew of Hadoop-related projects

ASF's Vice President for Marketing & Publicity, Sally Khudairi, who responded by email, said "Apache Project activity ebbs and flows throughout its lifetime, depending on community participation." Khudairi added: "We've...had an uptick in reviewing and assessing the activity of several Apache Projects, from within the Project Management Committees (PMCs) to the Board, who vote on retiring the Project to the Attic." Khudairi also said that Hervé Boutemy, ASF's Vice President of the Apache Attic "has been super-efficient lately with 'spring cleaning' some of the loose ends with the dozen-plus Projects that have been preparing to retire over the past several months." Despite ASF's assertion that this big data clearance sale is simply a spike of otherwise routine project retirements, it's clear that things in big data land have changed. Hadoop has given way to Spark in open source analytics technology dominance, the senseless duplication of projects between Hortonworks and the old Cloudera has been halted, and the Darwinian natural selection process among those projects completed.


DNS Vulnerabilities Expose Millions of Internet-Connected Devices to Attack

In a new technical report, Forescout and JSOF describe the set of nine vulnerabilities they discovered as giving attackers a way to knock devices offline or to download malware on them in order to steal data and disrupt production systems in operational technology environments. Among the most affected are organizations in the healthcare and government sectors because of the widespread use of devices running the vulnerable DNS implementations in both environments, Forescout and JSOF say. According to the two companies, patches are available for the vulnerabilities in FreeBSD, Nucleus NET, and NetX. Device vendors using the vulnerable stacks should provide updates to customers. But because it may not always be possible to apply patches easily, organizations should consider mitigation measures, such as discovering and inventorying vulnerable systems, segmenting them, monitoring network traffic, and configuring systems to rely on internal DNS servers, they say. The two companies also released tools that other organizations can use to find and fix DNS implementation errors in their own products. 




Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - April 13, 2021

19 Realistic Habits To Improve Software Development

When you finish writing a fragment of code and see that it works, take some time to reread it and see if you can improve it. Imagine that you are going to show it to someone else who will evaluate your code. Would you leave it the same? One of the best code refactoring techniques is the red/green process used in Agile test-driven development. To use this technique, your code must be covered with tests. If something fails during refactoring, a test will not pass, and you will know that something is wrong with your refactor. ... Plan a time interval without distractions or interruptions. Interruptions make your mind lose track of what it is developing, and you will have to start again when you resume the activity, which costs extra work time and makes you more prone to mistakes. It works to leave only the IDE open and a browser with a maximum of two tabs. ... Don’t try to write clever code that only you understand. Write code that someone else can read and understand. It doesn’t matter if your code has a few more lines if they’re necessary to make it easier to understand. Remember that in a few months, you or someone else on your team may have to modify the code, and if it is not easy to understand, it will not be easy to modify.
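To make the red/green idea concrete, here is a minimal pytest-style sketch; the function and figures are invented for illustration. The working version keeps the test green, and the refactored version must keep it green too, so any regression shows up immediately as red.

    # Before refactoring: verbose but working, and covered by the test below.
    def total_price(items):
        total = 0
        for item in items:
            total = total + item["price"] * item["quantity"]
        return total

    def test_total_price():
        items = [{"price": 2.0, "quantity": 3}, {"price": 1.5, "quantity": 2}]
        assert total_price(items) == 9.0

    # After refactoring: same behaviour, clearer expression (shown here shadowing the
    # earlier version only to contrast before and after). Rerun the test; if it goes
    # red, the refactor broke something.
    def total_price(items):
        return sum(item["price"] * item["quantity"] for item in items)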


Clear & Present Danger: Data Hoarding Undermines Better Security

Even though there is overlap between the users of big companies' services and the customers of small businesses, the big companies aren't sharing their data. As a result, customers who use smaller businesses are left to fend for themselves. A few companies are trying to change that. Deduce (disclosure, another company I've consulted for) created a data collective through which companies can share information about users' security-related behavior and logins. In exchange for sharing data with the platform, companies get access to Deduce's repository of identity data from over 150,000 websites. They can use this shared data to better detect suspicious activity and alert their users, just as Microsoft and Google do using their own data. In a different approach to helping businesses identify suspicious users, LexisNexis created unique identifiers for their clients' customers. Using these identifiers, their clients can share trust scores that indicate whether a particular user is suspicious. If a suspicious user attempts to log in to a website, the site can block that user to keep itself and its legitimate users safer.


Optimizing the CIO and CFO Relationship

“CIOs are more likely to be pioneers and/or integrators, while CFOs are more likely to be guardians and drivers,” according to consultancy Deloitte in a description of different corporate personality types. “Pioneers are novelty-seeking, they like having a variety of possibilities, generating new ideas….On the other hand, the guardian personality values structure and loyalty and is much more methodical, detail-oriented, and perhaps a little more risk-averse.” ... “CFOs understand that they have to change and expand their skills,” said Mastanuono. “The modern CFO understands technology and how it can transform the business. He or she also needs to understand the future of what finance will look like, and be a transformer of people, processes, and systems. The CFO must move from being a reactive to a proactive collaborator so the end business can be positioned to have the right systems and data at the right time. Breaking down silos and developing empathy and cross-functional collaboration are requirements, and the CFO-CIO relationship is a critical piece.” ... If CFOs and CIOs can develop a common approach to IT investments that looks at strategic risks as well as benefits, it creates common ground for project discussions and evaluations.


How to address post-pandemic infrastructure pain points

Managing workforce transformation is already challenging enough for employees who need to access on-premises resources. It becomes even more difficult if these employees work in regulated sectors, as medical and financial organizations need to track their employees’ identities, access requests, and usage to an even greater degree. Moreover, because there’s no one set of global standards, IT teams will need to account for many different compliance frameworks that vary based on where an employee is sitting, what information they’re accessing, and what sector they’re working in. On top of that, as businesses build new infrastructures that can accommodate and monitor permanently remote workers, they must be mindful of how certain regulations affect what personally identifiable information they can record about their own employees. GDPR, CCPA, and other privacy laws predate the pandemic, but like workforce transformation, they’ve become even starker and more commonplace challenges now. Different jurisdictions will have different mandates, and your IT teams will need to account for them all.


12 steps towards a secure project management framework

Cyber security is a tech-heavy domain, and project/program management is essential to delivering successful projects. However, cyber security requires a few tweaks to regular management practices, as it comes with a different set of requirements. A cyber security management program is complex in nature and entails systematic processes. It deals with all aspects of a company’s operations, from mapping and recruiting skilled security professionals to vendor risk management. It involves protecting and securing computer systems, networks, and data from theft or damage, thereby ensuring business continuity. A project manager usually has to oversee many one-time and recurring cyber security tasks while handling usual responsibilities and priorities. A good project management framework will ensure that projects are delivered smoothly, without exceeding budgets, and within the agreed timeframe. For any project management program to be successful, it’s important to define roles and responsibilities, a detailed plan of action, and milestones to be achieved. While most standard project management practices hold good in cyber security programs, there are a few cyber security-specific aspects that need to be taken care of with absolute diligence and strict adherence.


Information Relativity

Relativity was introduced at the beginning of the last century, when Einstein showed that reality is fundamentally different depending on your frame of reference, a distortion of the spacetime continuum. The concept has led to the discovery of black holes, gravitational lenses, time dilation, and all kinds of other fantastic things. Relativity is not at all what one would expect based on our regular day-to-day lives, which operate according to the classical laws of physics. It changes what it means to observe and to be an observer: how we experience the world differs, not just how we interpret it. There are circumstances where the world I experience is inconsistent with yours. It turns out that communication has these same circumstances and works in this same peculiar way. Information is distorted depending on the location of the observer. Mark Burgess calls this “information relativity”: messages can take multiple paths and interfere with one another, information can be reversed in its order as it travels along one path, and the speed of communication along one path can differ from the speed along another.
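A toy simulation makes the path-and-ordering point concrete. In the plain-Python sketch below, the send times and per-path latencies are invented numbers; two observers receive the same two messages over different paths and therefore see the events in different orders.

    # Two messages sent at times 0 and 1; each observer sits at the end of a path
    # with its own per-message latency (illustrative numbers only).
    messages = [("A", 0), ("B", 1)]            # (message, send_time)
    paths = {
        "observer_1": {"A": 2.0, "B": 0.1},    # A is delayed on this path, so B overtakes it
        "observer_2": {"A": 0.1, "B": 0.5},    # A arrives first on this path
    }

    for observer, latency in paths.items():
        arrivals = sorted(messages, key=lambda m: m[1] + latency[m[0]])
        order = " then ".join(msg for msg, _ in arrivals)
        print(f"{observer} sees: {order}")     # observer_1: B then A; observer_2: A then B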


The Role of EiPaaS in Enterprise Architecture: Part 1

When discussing enterprise architecture, a diagram of the IT landscape comes to mind because that is the standard approach to defining an architecture. However, during our work with a number of enterprise architecture teams worldwide, we discovered that enterprise architecture has a larger strategic scope than what typical IT diagrams capture. Fundamentally, enterprise architecture converts business strategy into a value generation outcome by creating a foundation to execute various IT initiatives and processes. It is about gaining a long-term view for the organization, including the integration and standardization of various elements involved in the business. ... At the initial stages, an enterprise architecture will define the systems and subsystems required for each of the organization’s functions. It starts with purchasing core systems, such as human resource management (HRM), customer relationship management (CRM) and/or enterprise resource planning (ERP), based on the business domain of the organization. In addition, subsystems will be built around the core systems by in-house or outsourced development teams. Systems and subsystems that belong to each function operate independently with limited or no information exchange.


Nvidia announces Morpheus, an AI-powered app framework for cybersecurity

Morpheus essentially enables compute nodes in networks to serve as cyberdefense sensors — Nvidia says its newly announced BlueField-3 data processing units can be specifically configured for this purpose. With Morpheus, organizations can analyze packets without information replication, leveraging real-time telemetry and policy enforcement, as well as data processing at the edge. Thanks to AI, Morpheus can ostensibly analyze more security data than conventional cybersecurity app frameworks without sacrificing cost or performance. Developers can create their own Morpheus skills using deep learning models, and Nvidia says “leading” hardware, software, and cybersecurity solutions providers are working to optimize and integrate datacenter security offerings with Morpheus, including Aria Cybersecurity Solutions, Cloudflare, F5, Fortinet, Guardicore, Canonical, Red Hat, and VMware. Morpheus is also optimized to run on a number of Nvidia-certified systems from Atos, Dell, Gigabyte, H3C, HPE, Inspur, Lenovo, QCT, and Supermicro. Businesses are increasingly placing their faith in defensive AI like Morpheus to combat the growing number of cyberthreats.


Automation will accelerate decentralization and digital transformation

As the vaccinated population grows, doors reopen, and more people come together again, the reality we find ourselves in will not be the one left behind in 2019. Many long for a return to in-person experiences, but at the same time, have grown accustomed to the flexibilities of a decentralized, digital-first world. As we emerge from lockdown, hitting "rewind" will not satisfy customer and employee needs. Instead, companies must create hybrid experiences that integrate both digital and in-person modalities. In addition, the growing expectations of stakeholders have created unprecedented demand for IT innovation and a greater sense of urgency in the post-pandemic world. Even as more offline activities resume, 2020's rapid digitalization will have a large and lasting impact on both customer and employee experiences. For example, analysis of global research from Salesforce shows customers anticipate engaging online with companies just as much in 2021 as they did in 2020. That customers expect to maintain this substantial departure from their 2019 patterns suggests that the swing to digital at the height of the pandemic wasn't purely due to unavailability of in-person channels.


How data poisoning attacks corrupt machine learning models

The main problem with data poisoning is that it's not easy to fix. Models are retrained with newly collected data at certain intervals, depending on their intended use and their owner's preference. Since poisoning usually happens over time, and over some number of training cycles, it can be hard to tell when prediction accuracy starts to shift. Reverting the poisoning effects would require a time-consuming historical analysis of inputs for the affected class to identify all the bad data samples and remove them. Then a version of the model from before the attack started would need to be retrained. When dealing with large quantities of data and a large number of attacks, however, retraining in such a way is simply not feasible and the models never get fixed, according to F-Secure's Patel. "There's this whole notion in academia right now that I think is really cool and not yet practical, but we'll get there, that's called machine unlearning," Hyrum Anderson, principal architect for Trustworthy Machine Learning at Microsoft, tells CSO. "For GPT-3 [a language prediction model developed by OpenAI], the cost was $16 million or something to train the model once.
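A highly simplified sketch of the recovery process described above might look like the following (Python with scikit-learn); the flagging heuristic, record fields, and cutoff are assumptions for illustration, since identifying the poisoned samples is exactly the expensive, time-consuming part.

    from sklearn.linear_model import LogisticRegression

    def rebuild_after_poisoning(training_log, attack_start, affected_class, is_suspect):
        """Drop suspect samples for the affected class collected after the attack began,
        then retrain a fresh model on the cleaned history (a stand-in for restoring a
        pre-attack model version and retraining it)."""
        cleaned = [
            rec for rec in training_log
            if not (
                rec["collected_at"] >= attack_start
                and rec["label"] == affected_class
                and is_suspect(rec)          # Result of historical analysis or manual review
            )
        ]
        X = [rec["features"] for rec in cleaned]
        y = [rec["label"] for rec in cleaned]
        model = LogisticRegression(max_iter=1000)
        model.fit(X, y)
        return model, len(training_log) - len(cleaned)   # New model plus count of removed samples

As the article notes, at large data volumes this kind of historical filtering and full retraining is often too expensive to be practical.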



Quote for the day:

"It's not about how smart you are--it's about capturing minds." -- Richie Norton