Daily Tech Digest - March 07, 2021

AI projects yield little business value so far

David Semach, a partner and head of AI and automation at Infosys Consulting for Europe, the Middle East and Africa (EMEA), agrees with the researchers that satisfaction with the technology in a financial sense is often quite low, partly because organisations “are mostly still experimenting” with it. This means it tends to be deployed in pockets rather than widely across the business. “The investment required in AI is significant, but if it’s just done in silos, you don’t gain economies of scale, you can’t take advantage of synergies and you don’t realise the cost benefits, which means it becomes a cost-prohibitive business model in many instances,” says Semach. Another key issue here is the fact that most companies “mistakenly” concentrate on using the software to boost the efficiency of internal processes and operating procedures, rather than for generating new revenue streams. “Where companies struggle is if they focus on process efficiencies and the bottom line because of the level of investment required,” says Semach. “But those that focus on leveraging AI to create new business and top-line growth are starting to see longer-term benefits.” 


How Optimizing MLOps can Revolutionize Enterprise AI

With database deployment, it takes only one line of code to deploy a model. The database-deployment system automatically generates a table and a trigger that embody the model execution environment. No more messing with containers. All a data scientist has to do is enter feature records into the system-generated predictions table to run inference on those features: the system automatically executes a trigger that runs the model on the new records. This saves time for future retraining too, since the predictions table holds all the new examples to add to the training set. Predictions stay continuously up to date with little to no manual code. ... The other major bottleneck in the ML pipeline happens during the data transformation process: manually transforming data into features and serving those features to the ML model is time-intensive and monotonous work. A feature store is a shareable repository of features built to automate the input, tracking, and governance of data into machine learning models. Feature stores compute and store features, enabling them to be registered, discovered, used, and shared across a company.
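The article shows no code, but the feature-store idea can be sketched in a few lines of Python. The class and method names below are hypothetical and are not the API of any particular product; real feature stores add versioning, governance, and low-latency serving on top of this basic register/materialize/serve loop.

```python
# A minimal, hypothetical sketch of a feature store: features are computed once,
# registered under a name, and served to any model that asks for them.
import math
from datetime import datetime, timezone

class FeatureStore:
    def __init__(self):
        self._features = {}   # name -> (compute_fn, metadata)
        self._values = {}     # (name, entity_id) -> value

    def register(self, name, compute_fn, description=""):
        """Register a feature and the function that computes it."""
        self._features[name] = (compute_fn,
                                {"description": description,
                                 "registered_at": datetime.now(timezone.utc)})

    def materialize(self, name, entity_id, raw_record):
        """Compute and store a feature value for one entity."""
        compute_fn, _ = self._features[name]
        self._values[(name, entity_id)] = compute_fn(raw_record)

    def get_vector(self, entity_id, names):
        """Serve a feature vector for training or inference."""
        return [self._values[(n, entity_id)] for n in names]

store = FeatureStore()
store.register("txn_amount_log", lambda r: math.log1p(r["amount"]),
               description="log-scaled transaction amount")
store.materialize("txn_amount_log", entity_id=42, raw_record={"amount": 100.0})
print(store.get_vector(42, ["txn_amount_log"]))
```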


How to get started in quantum computing

Unlike binary bits, qubits can exist as a ‘superposition’ of both 1 and 0, resolving one way or the other only when measured. Quantum computing also exploits properties such as entanglement, in which changing the state of one qubit also changes the state of another, even at a distance. Those properties empower quantum computers to solve certain classes of problem more quickly than classical computers. Chemists could, for instance, use quantum computers to speed up the identification of new catalysts through modelling. Yet that prospect remains a distant one. Even the fastest quantum computers today have no more than 100 qubits, and are plagued by random errors. In 2019, Google demonstrated that its 54-qubit quantum computer could solve in minutes a problem that would take a classical machine 10,000 years. But this ‘quantum advantage’ applied only to an extremely narrow situation. Peter Selinger, a mathematician and quantum-computing specialist at Dalhousie University in Halifax, Canada, estimates that computers will need several thousand qubits before they can usefully model chemical systems.
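As a concrete illustration of superposition and entanglement, the two-qubit "Bell state" circuit below puts one qubit into superposition with a Hadamard gate and entangles a second qubit with it via a CNOT. The article contains no code; Qiskit is used here only as an assumed example SDK.

```python
# A minimal sketch using Qiskit (an assumed choice; any quantum SDK would do).
# The Hadamard gate puts qubit 0 into an equal superposition of |0> and |1>;
# the CNOT entangles qubit 1 with it, so measurements of the two qubits are
# perfectly correlated: roughly half '00' and half '11', almost never '01' or '10'.
from qiskit import QuantumCircuit

bell = QuantumCircuit(2, 2)
bell.h(0)              # superposition on qubit 0
bell.cx(0, 1)          # entangle qubit 1 with qubit 0
bell.measure([0, 1], [0, 1])
print(bell.draw())     # ASCII diagram of the circuit
```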


Non-Invasive Data Governance Q&A

Data Governance can be non-invasive if people are recognized in the role of data steward based on their existing relationship to the data. People define, produce and use data as part of their everyday jobs. People are automatically stewards if they are held formally accountable for how they define, produce and use data. The main premise of being non-invasive is that the organization is already governing data (in some form); the issue is that it is doing so informally, leading to inefficiency and ineffectiveness in how the data is governed. For example, people who use sensitive data are already expected to protect that data. The NIDG approach assures that these people know how data is classified, and that they follow the appropriate handling procedures for the entire data lifecycle. You are already governing, but you can do it a lot better. We are not going to overwhelm you with new responsibilities that you should already have. ... The easiest answer to that question is that almost everybody looks at governance the way they look at government. People think that data governance has to be difficult, complex and bureaucratic, when the truth is that it does NOT have to be that way. People are already governing and being governed within organizations, but it is being done informally.


Apache Kafka in a Smart City Architecture

A smart city functions alongside various interfaces, data structures, and technologies. Many high-volume data streams must be integrated, correlated, and processed in real-time. Scalability and elastic infrastructure are essential for success. Many data streams contain mission-critical workloads and must offer reliability, zero data loss, and persistence. An event streaming platform based on the Apache Kafka ecosystem provides all these capabilities. ... A smart city requires more than real-time data integration and real-time messaging. Many use cases are only possible if the data is also processed continuously in real-time. That's where Kafka-native stream processing frameworks such as Kafka Streams and ksqlDB come into play. ... The public sector and smart city architectures leverage event streaming for various use cases. The reasons are the same as in all other industries: Kafka provides an open, scalable, elastic infrastructure. Additionally, it is battle-tested and runs in every infrastructure (edge, data center, cloud, bare metal, containers, Kubernetes, fully-managed SaaS such as Confluent Cloud). But event streaming is not the silver bullet for every problem. Therefore, Kafka complements other technologies such as MQTT for edge integration, or a cloud data lake for batch analytics.
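To make the event-streaming idea concrete, here is a minimal sketch of publishing a smart-city sensor reading to Kafka with the confluent-kafka Python client. The broker address, topic name, and payload schema are hypothetical; the article does not prescribe any of them.

```python
# A hedged sketch: one traffic-sensor reading produced to a Kafka topic.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

reading = {"sensor_id": "traffic-cam-017", "vehicles_per_min": 42, "ts": 1615100000}
producer.produce(
    "city.traffic.readings",          # hypothetical topic name
    key=reading["sensor_id"],
    value=json.dumps(reading),
)
producer.flush()  # block until the broker has acknowledged the message
```

Downstream, a Kafka Streams or ksqlDB application would consume this topic and correlate it with other streams in real time.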


Remote work: 4 ways to spot a toxic culture

Trust is the fundamental element of a high-performing culture. Especially in a remote workplace, it’s difficult to be a lone wolf and not collaborate on projects. If you notice that your team is avoiding working with someone, look to see if it’s a pattern. Perhaps that individual is “phoning it in” or making too many mistakes, and the team can’t trust their work anymore. You need to address this right away to avoid disappointing the rest of the team. Ask yourself: How much do folks enjoy redoing someone else’s work? Or watching the employee screw up and get away with it? Or questioning why they are working so hard while others aren’t? Worse, your team members may start wondering if they can trust you as a manager if you won’t handle the problem. ... When you hear someone make a statement that may be judgmental, ask the person, “What do you know to be true? What are the facts?” A good way to tell if someone is stating facts or judgments is to apply the “videotape test”: Can what they describe be captured by a video camera? For example, “He was late for the meeting” is a fact and passes the test. In contrast, “He’s lazy” is a judgment and doesn’t pass the test. Be mindful when you’re hearing judgments and try to dig out the facts.


Changing the AI Landscape Through Professionalization

Organizations should work with data architects, business owners and solution architects to develop their AI strategy, underpinned by a data strategy and data taxonomy and by analyzing the value that their company can and wishes to create. As one view puts it, “Establishing a Data Driven culture is the key—and often the biggest challenge—to scaling artificial intelligence across your organization.” While your technology enables the business, your workforce is the essential driving force. It is crucial to democratize data and AI literacy by encouraging skilling, upskilling, and reskilling. People in the organization would need to change their mindset from experience-based, leadership-driven decision making to data-driven decision making, where employees augment their intuition and judgement with AI algorithms’ recommendations to arrive at better answers than either humans or machines could reach on their own. My recommendation would be to carve out a “System of Knowledge & Learning” as a separate stream in the overall Enterprise Architecture, alongside Systems of Record, Systems of Engagement & Experience, and Systems of Innovation & Insight. AI and data literacy will also help increase employee satisfaction, because the organization is allowing its workforce to identify new areas for professional development.


How Skyworks embraced self-service IT

At Skyworks, the democratization of IT is all about giving our business users access to technology—application development, analytics, and automation—with the IT organization providing oversight, but not delivery. IT provides oversight in the form of security standards and release and change management strategies, which gives our business users both the freedom to improve their own productivity and the assurance that they are not reinventing the automation wheel across multiple sites. COVID has been a real catalyst of this new operating model. As in most companies, when COVID hit, we started to see a flurry of requests for new automation and better analytics in supply chain and demand management. Luckily for us, we had already started to put the foundation in place for our data organization, so we were able to capitalize on this opportunity to move into self-service. ... For IT to shift from being order-takers to enablers of a self-service culture, we created a new role: the IT business partner. We have an IT business partner for every function; these people participate in all of the meetings of their dedicated function, and rather than asking “What new tool do you need?”, they ask, “What is the problem you are trying to solve?” IT used to sit at the execution table; with our new IT business partners, we now sit at the ideation table.


12 Service Level Agreement (SLA) best practices for IT leaders

Smart IT leaders understand that negotiation is not concession. It’s critical to reach a mutually agreed pathway to providing the service that the client expects, says Vamsi Alla, CTO at Think Power Solutions. In particular, IT leaders should work with providers on penalties and opt-out clauses. “A good SLA has provisions for mutually agreed-upon financial and non-financial penalties when service agreements are not met,” Alla says. Without that, an SLA is worth little more than the paper on which it’s written. ... “The most common mistake is to include performance metrics that are not properly reviewed and are unattainable,” Alla says. “This is usually done because the client has asked for it and the service provider is too willing to oblige. This may cause the contract to come through, but the road to execution becomes bumpy as the metrics can’t be achieved.” The level of service requested directly impacts the price of the service. “It’s important to understand what a reasonable level of performance is in the market so as not to over-spec expectations,” Fong of Everest Group says.


Security starts with architecture

Security must be viewed as an organizational value that exists in all aspects of an organization's operation and in every part of the product development life cycle. That includes planning, design, development, testing and quality assurance, build management, release cycles, the delivery and deployment process, and ongoing maintenance. The new approach has to be both strategic and tactical. In strategic terms, every potential area of vulnerability has to be conceptually addressed through holistic architectural design. During the design process, tactical measures have to be implemented in each layer of the technology ecosystem (applications, data, infrastructure, data transfer, and information exchange). Ultimately, the responsibility will fall into the hands of the development and DevOps teams to build secure systems and fix security problems. The strategic and tactical approach outlined will allow them to handle security successfully. Security policies must be applied to the product development life cycle right where code is being written, data systems are being developed, and infrastructure is being set up.



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - March 06, 2021

From Agile to DevOps to DevSecOps: The Next Evolution

While Agile and DevOps share common goals, they have not always agreed on how to achieve those goals. DevOps differs in many respects from Agile, but, at its best, DevOps applies Agile methodologies, along with lean manufacturing principles, to speed up software deployment. One area of particular tension between Agile and DevOps is that the latter relies heavily on tools, in particular for the automation of testing and deployment processes. But DevOps can overcome the resistance of Agile developers to tool usage simply by applying Agile principles themselves. Effectively, DevOps proponents must convince Agile teams that dogmatic adherence to the underlying principles of Agile is actually inconsistent with Agile in the first place. Ironically, Agile developers who insist that reliance on process and tools is always bad actually violate Agile principles by refusing to acknowledge the benefits offered through change, itself another Agile principle. The challenge is to have the Agile development teams trust in the automation efforts of DevOps, while at the same time encouraging the DevOps team to consider the business goals of deployment rather than pursuing speed of deployment for its own sake.


Geoff Hinton’s Latest Idea Has The Potential To Break New Ground In Computer Vision

According to Dr. Hinton, the obvious way to represent the part-whole hierarchy is by combining dynamically created graphs with neural network learning techniques. But, if the whole computer is a neural network, he explained, it is unclear how to represent part-whole hierarchies that are different for every image, if we want the structure of the neural net to be identical for all images. Capsule networks, introduced by Dr. Hinton a couple of years ago, offer a solution: a group of neurons, called a capsule, is set aside for each possible type of object or part in each region of the image. However, the problem with capsules is that they use a mixture to model the set of possible parts. The computer will have a hard time answering questions such as “Is the headlight different from the tyres?” (more on this example in the next section). The recent work on neural fields offers a simple way to represent values of depth or intensity across the image. It uses a neural network that takes as input a code vector representing the image along with an image location, and outputs the predicted value at that location.
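The neural-field idea described in the last two sentences can be sketched directly. The architecture below is an assumed, minimal illustration in PyTorch, not Hinton's actual model: an MLP takes a code vector for the image plus an (x, y) query location and predicts a value (e.g. intensity or depth) at that location.

```python
# A minimal sketch (assumed architecture) of a neural field:
# value(x, y) = MLP(image_code, x, y)
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    def __init__(self, code_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),            # predicted intensity/depth
        )

    def forward(self, code, xy):
        # code: (batch, code_dim) image code; xy: (batch, 2) query locations
        return self.net(torch.cat([code, xy], dim=-1))

field = NeuralField()
code = torch.randn(4, 64)          # one code vector per image
xy = torch.rand(4, 2)              # query locations in [0, 1]^2
print(field(code, xy).shape)       # torch.Size([4, 1])
```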


Addressing Security Throughout the Infrastructure DevOps Lifecycle

Keep in mind that developer-first security doesn’t preclude “traditional” cloud security methods — namely monitoring running cloud resources for security and compliance misconfigurations. First of all, unless you’ve achieved 100% parity between IaC and the cloud (unlikely), runtime scanning is essential for complete coverage. You probably still have teams or parts of your environment — maybe legacy resources — that are still being manually provisioned via legacy systems or directly in your console and so need to be continuously monitored. Even if you are mostly covered by IaC, humans make mistakes and SREmergencies are bound to happen. We recently wrote about the importance of cloud drift detection to catch manual changes that result in unintentional deltas between code configuration and running cloud resources. Insight into those resources in production is essential to identify those potentially risky gaps. Runtime scanning also has some advantages. Because it follows the actual states of configurations, it’s the only viable way of evaluating configuration changes over time when configuration is managed through multiple methods. Relying solely on build-time findings without attributing them to actual configuration states at runtime could result in configuration clashes.
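The drift-detection idea is essentially a diff between what the code declares and what is actually running. The sketch below is a tool-agnostic illustration, not any vendor's implementation; real tooling compares against live cloud APIs, and the dictionaries here stand in for that.

```python
# A minimal sketch of drift detection: report every key whose declared (IaC)
# value differs from the value observed on the running resource.
def detect_drift(declared: dict, observed: dict) -> dict:
    """Return {key: (declared_value, observed_value)} for every drifted key."""
    keys = set(declared) | set(observed)
    return {
        k: (declared.get(k), observed.get(k))
        for k in keys
        if declared.get(k) != observed.get(k)
    }

declared = {"acl": "private", "versioning": True, "encryption": "aws:kms"}
observed = {"acl": "public-read", "versioning": True, "encryption": "aws:kms"}
print(detect_drift(declared, observed))
# {'acl': ('private', 'public-read')}  <- a manual change that never made it back into code
```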


Privacy breaches in digital transactions: Examination under Competition Law or Data Protection Law?

As long as the search engines’ search data is kept secret, no rival, would-be rival or entrant will have access to this critical ‘raw material’ for search innovation. Further, when transactions take place in the digital economy, firms generally tend to collect personal as well as non-personal data of users in exchange for services provided. While it can be argued that personal data is probably collected with the user’s consent, collection of non-personal data usually happens without the consent or knowledge of the consumers. Data is further compromised when businesses that have large amounts of data merge or amalgamate, and when dominant firms abuse their market position and resort to unethical practices. Traditional Competition Law analysis focuses largely on ‘pricing models’, i.e., the methods used by business players to determine the price of their goods or services. User data forms part of the ‘non-pricing model’. With the Competition Act, 2002 undergoing a number of changes owing to technological developments, there is a possibility that non-pricing models will also be considered under the ambit of the Act.


GraphQL: Making Sense of Enterprise Microservices for the UI

GraphQL has become an important tool for enterprises looking for a way to expose services via connected data graphs. These graph-oriented ways of thinking offer new advantages to partners and customers looking to consume data in a standardized way. Apart from the external consumption benefits, using GraphQL at Adobe has offered our UI engineering teams a way to grapple with the challenges related to the increasingly complicated world of distributed systems. Adobe Experience Platform itself offers dozens of microservices to its customers, and our engineering teams also rely on a fleet of internal microservices for things like secret management, authentication, and authorization. Breaking services into smaller components in a service-oriented architecture brings a lot of benefits to our teams, but some drawbacks need to be mitigated to realize those advantages. More layers mean more complexity. More services mean more communication. GraphQL has been a key component for the Adobe Experience Platform user experience engineering team: one that allows us to embrace the advantages of SOA and helps us navigate the complexities of microservice architecture.
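To show the pattern rather than Adobe's actual gateway (which the article does not include), here is a minimal sketch using the graphene library, an assumed choice. One GraphQL query fans out to two hypothetical microservice calls, and the UI receives a single shaped response.

```python
# A hedged sketch: a GraphQL schema that aggregates two stand-in microservices.
import graphene

def fetch_user(user_id):          # stand-in for a call to a user microservice
    return {"id": user_id, "name": "Ada"}

def fetch_entitlements(user_id):  # stand-in for a call to an entitlements microservice
    return ["analytics", "segmentation"]

class User(graphene.ObjectType):
    id = graphene.ID()
    name = graphene.String()
    entitlements = graphene.List(graphene.String)

    def resolve_entitlements(parent, info):
        # Fan out to the second service only when the client asks for this field.
        return fetch_entitlements(parent.id)

class Query(graphene.ObjectType):
    user = graphene.Field(User, id=graphene.ID(required=True))

    def resolve_user(root, info, id):
        record = fetch_user(id)
        return User(id=record["id"], name=record["name"])

schema = graphene.Schema(query=Query)
result = schema.execute('{ user(id: "42") { name entitlements } }')
print(result.data)
```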


Serverless Functions for Microservices? Probably Yes, but Stay Flexible to Change

When serverless functions are idle, they cost nothing (a “pay per use” model). If a serverless function is called by 10 clients at the same time, 10 instances of it are spun up almost immediately (at least in most cases). The entire provisioning of infrastructure, its management, high availability (at least up to a certain level) and scaling (from 0 to the limits defined by the client) are provided out of the box by teams of specialists working behind the scenes. Serverless functions provide elasticity on steroids and allow you to focus on what is differentiating for your business. ... A “new service” needs to go out to the market fast, with the lowest possible upfront investment, and needs to be a “good service” from the start. When we want to launch a new service, a FaaS model is likely the best choice. Serverless functions can be set up fast and minimise infrastructure work. Their “pay per use” model means no upfront investment. Their scaling capabilities provide consistently good response times under different load conditions. If, after some time, the load becomes more stable and predictable, then the story can change, and a more traditional model based on dedicated resources, whether Kubernetes clusters or VMs, can become more convenient than FaaS.
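For readers who have not written one, a serverless function is typically just a handler the platform invokes on demand. The sketch below follows the AWS Lambda handler convention as an assumed example; the article itself is provider-neutral.

```python
# A minimal sketch of a FaaS handler. The platform, not the team, decides how
# many copies of handler() run concurrently and bills only per invocation.
import json

def handler(event, context):
    """Handle one HTTP request routed to this function by the platform."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test; in production the cloud provider invokes handler() directly.
if __name__ == "__main__":
    print(handler({"body": json.dumps({"name": "FaaS"})}, context=None))
```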


Unconventional Sentiment Analysis: BERT vs. Catboost

Sentiment analysis is a Natural Language Processing (NLP) technique used to determine if data is positive, negative, or neutral. Sentiment analysis is fundamental, as it helps to understand the emotional tones within language. This, in turn, helps to automatically sort the opinions behind reviews, social media discussions, etc., allowing you to make faster, more accurate decisions. Although sentiment analysis has become extremely popular in recent times, work on it has been progressing since the early 2000s. Traditional machine learning methods such as Naive Bayes, Logistic Regression, and Support Vector Machines (SVMs) are widely used for large-scale sentiment analysis because they scale well. Deep learning (DL) techniques have now been proven to provide better accuracy for various NLP tasks, including sentiment analysis; however, they tend to be slower and more expensive to train and use. ... CatBoost is a high-performance, open-source library for gradient boosting on decision trees. From release 0.19.1, it supports text features for classification on GPU out-of-the-box. The main advantage is that CatBoost can include categorical features and text features in your data without additional preprocessing.
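The "no additional preprocessing" point can be illustrated with a minimal sketch, assuming a recent CatBoost version with text-feature support; the tiny inline dataset is invented for illustration, and real sentiment work needs far more data.

```python
# A hedged sketch: the raw 'review' strings are passed straight in as a text
# feature; no manual tokenisation or vectorisation step is written by us.
from catboost import CatBoostClassifier, Pool

texts = ["great product, works perfectly",
         "terrible, broke after one day",
         "absolutely love it",
         "waste of money, very disappointed"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

train_pool = Pool(data=[[t] for t in texts], label=labels, text_features=[0])

model = CatBoostClassifier(iterations=50, verbose=False)
model.fit(train_pool)

test_pool = Pool(data=[["really happy with this purchase"]], text_features=[0])
print(model.predict(test_pool))
```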


How to gain advanced cyber resilience and recovery across the IT ops and SecOps divide

The legacy IT world was all about protecting what they know about, and it’s hard to change. The new world is all about automation, right? It impacts everything we want to do and everything that we can do. Why wouldn’t we try to make our jobs as simple and easy as possible? When I first got into IT, one of my friends told me that the easiest thing you can do is script everything that you possibly can, just to make your life simpler. Nowadays, with the way digital workflows are going, it’s not just automating the simple things; now we’re able to easily automate the complex ones, too. We’re making it so anybody can jump in and get this automation going as quickly as possible. ... The security challenge has changed dramatically. What’s the impact of the Internet of Things (IoT) and edge computing? We’ve essentially created a much larger attack surface area, right? What’s changed in a very positive way is that this expanded surface has driven automation and the capability to not only secure workflows but to collaborate on those workflows. We have to have the capability to quickly detect, respond, and remediate.


Compute to data: using blockchain to decentralize data science and AI with the Ocean Protocol

If algorithms run where the data is, then this means how fast they will run depends on the resources available at the host. So the time needed to train algorithms that way may be longer compared to the centralized scenario, factoring in the overhead of communications and crypto. In a typical scenario, compute needs move from client side to data host side, said McConaghy: "Compute needs don't get higher or lower, they simply get moved. Ocean Compute-to-Data supports Kubernetes, which allows massive scale-up of compute if needed. There's no degradation of compute efficiency if it's on the host data side. There's a bonus: the bandwidth cost is lower, since only the final model has to be sent over the wire, rather than the whole dataset. There's another flow where Ocean Compute-to-Data is used to compute anonymized data. For example using Differential Privacy, or Decoupled Hashing. Then that anonymized data would be passed to the client side for model building there. In this case most of the compute is client-side, and bandwidth usage is higher because the (anonymized) dataset is sent over the wire. Ocean Compute-to-Data is flexible enough to accommodate all these scenarios".
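The bandwidth point is easiest to see in a conceptual sketch. The code below is not Ocean Protocol's API; it only illustrates the compute-to-data flow in which the training function travels to the data host and only the fitted model comes back over the wire.

```python
# A conceptual sketch of compute-to-data (hypothetical names, not Ocean's SDK):
# the dataset never leaves the host; only the serialised model is returned.
import pickle

def run_at_data_host(train_fn, dataset):
    """Executed on the data provider's infrastructure."""
    model = train_fn(dataset)          # compute happens next to the data
    return pickle.dumps(model)         # only the model crosses the wire

# Client side: define training logic, never see the raw data.
def train_mean_model(rows):
    return {"mean": sum(rows) / len(rows)}   # stand-in for real model training

private_dataset = [3, 5, 7, 9]               # lives only on the host
model_bytes = run_at_data_host(train_mean_model, private_dataset)
print(pickle.loads(model_bytes))             # {'mean': 6.0}
```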


Exchange Server Attacks Spread After Disclosure of Flaws

Beyond the U.S. federal government, the impact of the vulnerabilities continues to grow - and not just among the targeted sectors named by Microsoft. The company says those groups include infectious disease researchers, law firms, higher education institutions, defense contractors, policy think tanks and nongovernment organizations. Volexity, which contributed research for the vulnerability findings, first noticed exploitation activity against its customers around Jan. 6. That activity has suddenly ticked up now that the vulnerabilities are public, says Adair, CEO and founder of the firm. "The exploit already looks like it has spread to multiple Chinese APT groups who have become rather aggressive and noisy - quite a marked change from how it started with what we were seeing," he says. Threat detection company Huntress says it has seen compromises of unpatched Exchange servers in small hotels, one ice cream company, a kitchen appliance manufacturer and what it terms "multiple senior citizen communities." "We have also witnessed many city and county government victims, healthcare providers, banks/financial institutions and several residential electricity providers," writes John Hammond, a senior threat researcher at Huntress.



Quote for the day:

"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell

Daily Tech Digest - March 05, 2021

Do Disaster Recovery Sites Actually Work?

Having the right resources available during a crisis is crucial. While internal IT teams perform disaster drills and claim they are able to run operations from the remote site, the fact is that new discoveries come to the fore when a disaster strikes, and the top management team is usually oblivious to these issues. These so-called drills are often just a farce put on for external and internal auditors, who seldom get to the core issues that may have occurred during the drill. Most auditor reports are checklists, and they are happy to tick them off and present a rosy picture to the management. It is important to remember that NSE’s press release also said that it did not invoke the disaster recovery site based on management consultations. So what actually is a disaster recovery site? In layman's terms, it is an alternate site (which can be in the same city or another city) that is capable of running all the operations designed for the primary site. The genesis of a disaster recovery site is the business continuity policy (BCP). This document lists all IT systems that hold an organisation's data, their dependencies on other systems, and all the elements necessary to run those systems.


We’ll never have true AI without first understanding the brain

The vast majority of AI researchers don’t really embrace the idea that the brain is important. I mean, yes, people figured out neural networks a while ago, and they’re kind of inspired by the brain. But most people aren’t trying to replicate the brain. It’s just whatever works, works. And today’s neural networks are working well enough. And most people in AI have very little understanding of neuroscience. It’s not surprising, because it’s really hard. It’s not something you just sit down and spend a couple of days reading about. Neuroscience itself has been struggling to understand what the hell’s going on in the brain. But one of the big goals of writing this book was to start a conversation about intelligence that we’re not having. I mean, my ideal dream is that every AI lab in the world reads this book and starts discussing these ideas. Do we accept them? Do we disagree? That hasn’t really been possible before. ... An AI that can detect cancer cells is great. But is that intelligence? No. In the book I use the example of robots on Mars building a habitat for humans. Try to imagine what kind of AI is required to do that. Is that possible? It’s totally possible.


Why paying off ransomware gangs is not a good idea

So what is the alternative? Insurers, especially in the US, urge their clients to quickly and quietly pay the ransom to minimise the damage of disruption. Then insurers allow the company to claim back the ransom payment on their insurance and raise their premiums for the following year. This payment is usually handled discreetly by a broker. In essence, the ransomware ecosystem functions like a protection racket, effectively supported by insurers who are set to pocket higher premiums as attacks continue. Aside from the moral objections we might have to routinely paying money to criminals, this practice causes two important practical problems. First, it encourages complacency in cybersecurity. This complacency was best exemplified when a hacked company paid a ransom, but never bothered to investigate how the hackers had breached their system. The company was promptly ransomed again, by the same group using the very same breach, just two weeks later. Second, some ransomware gangs invest their ill-gotten gains into the research and development of better cyber-tools. Many cybersecurity researchers are concerned about the increasing sophistication of the malware used by leading cybercrime groups such as REvil or Ryuk, which are both thought to be based in Russia.


Credential exposure trends: You need a better password

The data recovered by SpyCloud researchers includes more than 4.6 billion pieces of personally identifiable information (PII), including names, addresses, birthdates, job titles, social media URLs and nearly 1.3 billion phone numbers. Criminals use PII to create fake accounts or steal someone else’s identity and then apply for lines of credit, intercept tax refunds, drain bank accounts and more. With as little as one or two pieces of PII, they can compromise a person’s identity. Despite years of advice about the importance of strong passwords, people inevitably end up reusing or recycling the same credentials for multiple sites. Outdated password complexity requirements have complicated the issue by providing people with a false sense of security when they recycle a favorite password with a few simple changes, like capitalizing the first letter and adding a 1 or ! at the end. Industry standards call for organizations to convert plaintext passwords into hashes so if they are breached, criminals can’t easily access the passwords themselves. But some hashing algorithms are computationally harder to crack than others. Unfortunately, even the strongest hashing algorithm means little when users make weak or common password choices.
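The point about hashing algorithms differing in how hard they are to crack can be shown with the standard library alone. The sketch below contrasts a fast, unsalted SHA-256 digest with a deliberately slow, salted key-derivation function; the password and iteration count are arbitrary examples, and purpose-built schemes such as bcrypt, scrypt, or Argon2 from third-party libraries are generally preferred in production.

```python
# A minimal sketch: fast hash vs. slow, salted KDF for password storage.
import hashlib
import os

password = b"Winter2021!"          # a typical weak, recycled password

# Fast, unsalted hash: identical passwords always produce identical digests,
# and an attacker can test billions of guesses per second.
fast_digest = hashlib.sha256(password).hexdigest()

# Slow, salted hash: unique per user, and each guess costs ~600,000 times
# the work of a single SHA-256.
salt = os.urandom(16)
slow_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(fast_digest[:16], slow_digest.hex()[:16])
```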


The office of the future is about people not places

Experts suggest the future of employment will be a hybrid mix of office- and home-working. As many as 90% of HR leaders believe employees will carry on working remotely in the post-COVID age, says tech analyst Gartner. That shift to hybrid working is something that resonates with Paul Coby, CIO at global science and chemicals company Johnson Matthey, whose guess is that people in the future will split their time between working from home and going into the office. What that means for many of us is that the traditional nine-to-five working day at the corporate HQ isn't coming back. "It seems to me that sitting in an office doing emails doesn't seem like a great use of offices or an individual's time," says Coby. Other experts agree. Researcher CCS Insights predicts more than half of all office-based employees will still work mainly remotely through 2022. IoD research, meanwhile, suggests more than half of business leaders plan on reducing their long-term use of workplaces, with more than one in five reporting use will be significantly lower. The office that many of us knew – with its command-and-control leadership styles – is probably gone forever. Get it right, and managers could change how we all work for the better. But quite how business leaders will organise and manage the hybrid workplace of the future is still very much up for debate right now.


Why global power grids are still vulnerable to cyber attacks

“Power grids are getting increasingly vulnerable because of digitalization and the use of more smart applications,” said Daine Loh, a Singapore-based power and renewables analyst at Fitch Solutions. It’s a threat highlighted in an initial probe in India that found an October blackout in Mumbai may have been caused by cyber sabotage. That outage impacted stock markets, trains and thousands of households in the nation’s financial hub. The disruptive potential of grid failures — as seen in Texas last month due to a sudden deep freeze — makes the sector a key target, particularly for state-based hostile actors. Over the past four decades, power plants and substations have been moving from manual to automatic controls, and are increasingly being connected to public and private networks for remote access, leaving them exposed to attacks. Producers and distributors have also often been reluctant to spend on protecting themselves against low-probability attacks. “India’s power system is in urgent need of proper cybersecurity systems,” said Reji Kumar Pillai, president of India Smart Grid Forum, a think-tank backed by the federal power ministry and which advises governments, regulators and utilities. 


Version 2 of Google’s Flutter toolkit adds support for desktop and web apps

Over the course of the last year or so, the team started working on what it calls Canvas Kit. This WebAssembly-based project takes the same Skia graphics engine that powers Android and Chrome itself and makes it available to web apps. “What that’s meant is that we can now essentially bypass the core HTML — sort of the document-centric parts of the web platform — and really use the app-centric parts of the web platform without leaving [behind] things like auto-complete of text or passwords and all the things that keep the web feeling very unique,” Sneath said. On the desktop, Google is announcing that Canonical is going all-in on Flutter and making it the default choice of all its future desktop and mobile apps. Microsoft, too, is expanding its support for Flutter and working with Google on Windows support for Flutter. Given Microsoft’s interest in Android, that’s maybe no huge surprise, and indeed, Microsoft today is releasing contributions to the Flutter engine to help support foldable Android devices. In total, Google notes, there are now over 15,000 packages for Flutter and Dart from companies like Amazon, Microsoft, Adobe, Huawei, Alibaba, eBay and Square.


Data Science Environments

Environment managers are critical to the concept of virtual machines. A virtual machine (VM) can be thought of as a computer inside your computer (Inception style). As with everything mentioned here, a virtual machine is simply a method of file management. The VM is a program that isolates itself from all the other files on the computer; it may even run a different operating system. (For example, a macOS computer might have a virtual machine running a Windows operating system.) Virtual machines are very effective when you want to recreate a programming environment on another computer, because you don’t need to know anything about that computer’s current dependencies. You can set up a virtual machine that has exactly the same files visible to it as the virtual machine on your own computer. For this reason, data science heavily utilizes virtual machines; they ensure reproducibility of results. Cloud-based applications are another common example of VMs. The developer counts on the fact that their VM will be isolated from any other code living on the server hosting their application.
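At the interpreter level, the same isolation-for-reproducibility idea appears in environment managers. The sketch below uses Python's built-in venv module as an assumed, lighter-weight example rather than a full virtual machine; the directory name and package pins are arbitrary.

```python
# A minimal sketch of environment isolation for reproducibility with venv.
import subprocess
import sys
import venv

# 1. Create an isolated environment with its own interpreter and site-packages.
venv.create("analysis-env", with_pip=True)

# 2. Install exactly the pinned dependencies the analysis needs.
pip = ("analysis-env/bin/pip" if sys.platform != "win32"
       else r"analysis-env\Scripts\pip.exe")
subprocess.run([pip, "install", "numpy==1.20.1", "pandas==1.2.3"], check=True)

# 3. Freeze the environment so a collaborator can recreate it exactly.
with open("requirements.txt", "w") as f:
    subprocess.run([pip, "freeze"], stdout=f, check=True)
```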


8 mobile security threats you should take seriously

A mobile device is only as secure as the network through which it transmits data. In an era where we're all constantly connecting to networks that might not be optimally secured — be they improperly configured home networks, for remote workers, or public WiFi networks — our information frequently isn't as protected as we might assume. Just how significant of a concern is this? According to research by Wandera, in a more typical year, corporate mobile devices use WiFi almost three times as much as they use cellular data. Nearly a quarter of devices connect to open and potentially insecure WiFi networks, and 4% of devices encounter a man-in-the-middle attack — in which someone maliciously intercepts communication between two parties — within an average month. Those numbers have dipped this past year due to reduced travel and fewer physical businesses being open during COVID, but that doesn't mean the threat is gone — or that there's no need to remain ahead of the game, even with employees working mostly from home. "Rather than relying on man-in-the-middle attack detection to be reactive, we recommend organizations take a more proactive approach to securing remote connections," says Michael Covington


Risky business: 3 timeless approaches to reduce security risk in 2021

Some organizations see migrating to the cloud as a way of creating greenfield environments that can be secured more easily than the tech-debt-laden on-premises network. Yes, cloud adoption does have many security benefits. And, yes, the cloud does represent an opportunity to do things over better, with a greenfield architecture and best practices baked neatly into it. But many organizations’ lack of cloud security processes and controls can quickly eat away at the cloud’s forecasted risk-reduction benefits, reducing the overall value of this opportunity. You should keep in mind that AWS S3 buckets are left unsecured and publicly available all the time. Developers have been known to leave (or hardcode) private keys and tokens in the most embarrassing places. Personnel leave files unsecured and open to the public far too often. Publicly available endpoints are improperly segmented from private networks, sometimes creating an autobahn towards the on-premises network. Many organizations don’t do a good enough job of monitoring activity in the cloud or the information flow between the cloud and the on-premises network.
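The S3 point lends itself to a simple, automatable check. The sketch below uses boto3 as an assumed choice and flags buckets with no fully enabled public-access block; it relies on whatever credentials the environment provides and is a starting point, not a complete cloud-security audit.

```python
# A hedged sketch: list S3 buckets and flag any without a full public-access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError:
        # No public-access block configured at all is treated as a finding.
        fully_blocked = False
    if not fully_blocked:
        print(f"[finding] bucket '{name}' may allow public access")
```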



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer