Daily Tech Digest - March 13, 2020

The Digital Services Act: The Next GDPR

While we do not expect legislation to be complete in 2020, this year will largely determine where the lines around the initial proposals are drawn. Businesses need to engage now to ensure that the new Commission understands the plethora of services they are due to regulate. While the work will be led by Internal Market Commissioner Thierry Breton, it will become a joint effort across the College. With policy issues like consumer protection, disinformation, workers’ rights in the gig economy and competition also on the agenda, businesses will need to widen engagement efforts to the cabinets of Didier Reynders (Justice), Věra Jourová (Rule of Law), Nicolas Schmit (Employment) and Margrethe Vestager (Digital and Competition). Meanwhile, businesses must also be aware of the risk of the Digital Services Act becoming a belated Christmas tree bill, where policymakers in the Council and European Parliament can reopen old arguments concerning copyright or privacy. Of immediate concern to businesses is the expected consultation and communication on the scope of the DSA in the first quarter of 2020, followed by the first legislative proposals in the latter part of the year.


4 questions to determine your IT team's "electability"

Just as voters generally want candidates who reflect their values, organizations want to see an IT shop that reflects their values. For example, a financial institution that values (and needs) trust and security would suffer "organ rejection" with a technology leader that played fast and loose with security and put the overall company at significant risk. Ask yourself if your leadership style and technology organization reflect the broader company's risk appetite, speed of working and communicating, and overall culture. It's difficult to become a trusted advisor when you don't speak the same language or value the same organizational traits. While politicians who can reach across party lines seem to be an increasingly rare commodity, IT is an area ripe for cross-organizational collaboration. By virtue of working with most of the organization in some capacity, we're uniquely positioned to forge relationships that provide value to the company. Rather than acting as an order-taker who diligently implements a project for a defined stakeholder, look for opportunities to leverage the company's technology assets in new ways.


Secrets from cybersecurity pros: How to create a successful employee training program

The first step in developing a training program is finding the skills gap in your organization. Begin by determining what cybersecurity areas employees are most unfamiliar with, Papatheodorou said. "Their needs can be assessed via an online survey, or by asking employees and managers directly," Papatheodorou said. Another avenue for preparation is looking at outcomes. "Start by deciding what outcomes you most desire, and pick the right modality of training to best meet those outcomes -- which varies per organization," Lucas said. For example, "ask the security team and leadership some questions: What are our biggest risks? What are we protecting? All of this data will help you clarify where you should start," Plaggemier said. The organization could decide to do a general cybersecurity threat overview, a basic education that could teach employees how to spot and prevent breaches. Or, depending on the company's needs, the training could be more specialized, focusing on password security, email and social media policies, and protection of company data, Papatheodorou said.


Next wave of digital transformation requires better security, automation

Modern networks require application services—a pool of services necessary to deploy, run, and secure apps across on-premises or multi-cloud environments. Today, 69% of companies are using 10 or more application services, such as ingress control and service discovery. Ingress control is a relatively new application service that has become essential to companies with high API call volumes. It's one of many examples of the growing adoption of microservices-based apps. Security services remain the most widely deployed, dominating the top five: SSL VPN and firewall services (81%); IPS/IDS, antivirus, and spam mitigation (77%); load balancing and DNS (68%); web application firewalls (WAF) and DDoS protection (each at 67%). Over the next 12 months, the evolution of cloud and modern app architectures will continue to shape application services. At the top of the list (41%) is software-defined wide-area networking (SD-WAN). SD-WAN enables software-based provisioning from the cloud to meet modern application demands. Early SD-WAN deployments focused on replacing costly multi-protocol label switching (MPLS), but there is now greater emphasis on security as a core requirement for SD-WAN.


The algorithmic trade-off between accuracy and ethics


Building fairness into algorithms requires identifying a model that minimizes unfairness. This rather tautological quest is pursued by purposely imposing restraints on the algorithm, such as equalizing the false rejection rate for bank loans across different groups of people. Deciding what these restraints should be is a chore more appropriate for leaders than for engineers — it entails human judgement, policy, and ethics. The remaining pitfalls described by Kearns and Roth are caused not so much by algorithms as by humans trying to optimize algorithmic outcomes for themselves. For instance, people who live in residential neighborhoods that offer alternative routes to traffic-jammed freeways have been known to report nonexistent accidents to the navigation app Waze to induce it to steer drivers away from them. The solution set to these pitfalls includes teaching algorithms to anticipate and adjust for efforts to game them, using concepts such as simulated self-play. Gerald Tesauro of IBM Research first applied this idea successfully in 1992, when he created a world-class backgammon program by inducing it to learn by playing itself. 
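The fairness constraint mentioned above -- equalizing the false rejection rate for bank loans across groups -- can be made concrete in a few lines. This is an illustrative sketch with made-up decision data, not any particular lender's model:

```python
# Sketch: measuring one fairness constraint -- the false rejection rate
# (creditworthy applicants who were denied a loan) per group of people.
# The decision data below is invented purely for illustration.

def false_rejection_rate(decisions):
    """decisions: list of (approved: bool, creditworthy: bool) pairs."""
    qualified = [approved for approved, creditworthy in decisions if creditworthy]
    if not qualified:
        return 0.0
    return qualified.count(False) / len(qualified)

group_a = [(True, True), (False, True), (True, True), (False, False)]
group_b = [(False, True), (False, True), (True, True), (True, False)]

rate_a = false_rejection_rate(group_a)  # 1 of 3 creditworthy applicants rejected
rate_b = false_rejection_rate(group_b)  # 2 of 3 creditworthy applicants rejected

# A fairness constraint on the model might require this gap to stay
# below some threshold -- choosing that threshold is the human judgement
# the authors assign to leaders rather than engineers.
gap = abs(rate_a - rate_b)
```

Deciding whether to equalize this rate, some other error rate, or both is exactly the policy question the excerpt describes: the constraints are chosen by people, then imposed on the algorithm.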


Breaking Through Three Common Engineering Myths

Myth: Engineers Are Very Logical and Not Creative. This one seems to make sense – if engineers were creative, wouldn’t they have decided to be artists, writers, or some other "Fine Arts" profession? Wrong! The key word in being creative is right there – to create! Engineers create products, services, and processes that influence people every day. Whether your work goes into consumer applications, devices, or machines, the end product of engineering work is used by other people. If engineers suppressed their creativity, they would miss out on many insights into ways of solving problems. Every day, engineers need to find new ways to think outside the box to tackle new challenges. They have the fabulous opportunity and responsibility of imagining ways in which the world could be different and then creating ways to make that happen. That is at the heart of what creativity is all about and it should be inspiring and exciting for engineers. For example, engineering innovations have been a big part of healthcare improvement over the years.


AI could help with the next pandemic—but not with this one


Darren Schulte, an MD and CEO of Apixio, which has built an AI to extract information from patients’ records, thinks that medical records from across the US should be opened up for data analysis. This could allow an AI to automatically identify individuals who are most at risk from Covid-19 because of an underlying condition. Resources could then be focused on those people who need them most. The technology to read patient records and extract life-saving information exists, says Schulte. The problem is that these records are split across multiple databases and managed by different health services, which makes them harder to analyze. “I’d like to drop my AI into this big ocean of data,” he says. “But our data sits in small lakes, not a big ocean.” Health data should also be shared between countries, says Inam: “Viruses don’t operate within the confines of geopolitical boundaries.” He thinks countries should be forced by international agreement to release real-time data on diagnoses and hospital admissions, which could then be fed into global-scale machine-learning models of a pandemic.


Sumo Logic: cultural process shifts should precede platform lifts

For IT teams at new companies, this approach often involves making use of cloud services and systems to quickly construct what would have previously needed armies of consultants and huge amounts of hardware to deliver. What an opportunity to make the most of modern IT. For companies with existing investments, the sheet of paper is not so blank, but it still probably has plenty of scope for development. Digital transformation projects may be more complex due to the mix of old and new technology, but they should still provide great opportunities to modernise. ... The issue with these individual technology elements – cloud services offering more power, applications and information sources proffering more data, analytics tools providing the ability to work with data in real time – is that they lack context. Each of these projects might be a good opportunity to modernise, but they also have to join up with each other and with how people actually work in order to succeed. To achieve this, we have to look at the processes involved, the business objectives that we are looking to meet, and what intelligence gaps exist.



The report lists two major ransomware attacks that had dramatic effects on production supply chains in 2019.  The March 19 cyberattack on aluminum producer Norsk Hydro involved LockerGoga, a previously seen ransomware tool that "halted operations at the company's corporate headquarters in Norway and impeded productivity in its extruded solutions division throughout Europe and North America."  "Analysts believe the attack marks a worrying trend, due to its international scope and direct impact on production and logistics assets," the report added. On June 7, there was another ransomware attack on Belgian aerospace supplier ASCO Industries that forced the company to shut down production lines at four different factories across North America and Europe.  The attack was so damaging that the company furloughed nearly 1,000 employees temporarily and was out of operation for more than a month. "Greater connectivity and digitalization are making manufacturing and supply chain operations more vulnerable to cyber-threats. Factories and logistics facilities can be caught in the crossfire of large-scale cyberattacks by criminals or state-sponsored groups, but they are also being targeted directly by a variety of actors," the report said.


Raspberry Pi is your new private cloud

If you’ve not guessed by now, this makes running a Raspberry Pi-based Kubernetes cluster feasible since this Kubernetes distribution is really purpose-built for the Pi, of course with some limitations. ... This enabling technology lets cloud architects place Kubernetes clusters running containers outside of the centralized public cloud on small computers that will work closer to the sources of the data. The clusters are still tightly coordinated, perhaps even spreading an application between a public cloud platform and hundreds or even thousands of Raspberry Pis running k3s. Clearly it’s a type of edge computing with thousands of use cases. What strikes me about this pattern of architecture is that cheap, edge-based devices are acting like lightweight private clouds. They provision resources as needed and use a preferred platform such as containers and Kubernetes. Of course, they have an upper limit of scalability. This is what hybrid cloud was supposed to be, but never was. Pairing a private and public cloud meant…well…you had to use a private cloud. Purpose-built private clouds fell way behind in features and functionality, so much so that enterprises are moving away from them in 2020, whether or not they have already been deployed.



Quote for the day:

"It is time for a new generation of leadership to cope with new problems and new opportunities for there is a new world to be won." -- John F. Kennedy

Daily Tech Digest - March 12, 2020

Stop saying employees are the weakest link in cybersecurity


Firstly, framing the conversation like this doesn’t get us anywhere. Are football players to blame when they lose a match? Well, in a way, but the players are also to ‘blame’ when they win. And even when they do lose, telling them that they’re the problem is only going to demoralize them and lead to further losses. Secondly, if blame has to lie somewhere, it surely lies with the security awareness programs rather than the employees who rely on those programs to better protect themselves. The reason that human-error breaches continue to occur at such a rate is that – and let’s be honest here – security awareness training in its current form just doesn’t work. Training doesn’t work because, in most cases, it focuses solely on awareness. Awareness is all well and good, but increased awareness by itself is not what necessarily matters. Just because people are ‘aware’ of cyber risks doesn’t mean that, in the real world, they will behave in a more secure way.



The top priority for CIOs is to ensure companies can manage the huge and sudden spike in demand for remote-working capacity caused by the closure of offices and other facilities. “This required my team to make some adjustments to the way we supply necessary equipment and remote access to [our] networks,” explains Kota, who says Autodesk has created a self-service toolkit so that many more employees can quickly set themselves up to work remotely if the need arises. Nikolaj Sjoqvist, the chief digital officer of Waste Management, a $47 billion waste-management and environmental-services giant, says it has increased the number of licenses available for virtual private networks and is scaling up its networking capacity to support more remote work. Sjoqvist is also tapping cloud-based applications and services that can quickly be spun up to support the effort. For employees used to working on desktops, his team is leveraging virtual desktop infrastructure technology to give them access to applications on their personal computers.


What is data governance? A best practices framework for managing data assets


Data governance is just one part of the overall discipline of data management, though an important one. Whereas data governance is about the roles, responsibilities, and processes for ensuring accountability for and ownership of data assets, DAMA defines data management as "an overarching term that describes the processes used to plan, specify, enable, create, acquire, maintain, use, archive, retrieve, control, and purge data." While data management has become a common term for the discipline, it is sometimes referred to as data resource management or enterprise information management. Gartner describes EIM as "an integrative discipline for structuring, describing, and governing information assets across organizational and technical boundaries to improve efficiency, promote transparency, and enable business insight." ... BARC warns that data governance is not a "big bang initiative." As a highly complex, ongoing program, data governance runs the risk of participants losing trust and interest over time. To counter that, BARC recommends starting with a manageable or application-specific prototype project and then expanding data governance across the company based on lessons learned.


How Cloud, Security and Big Data Are Forcing CIOs to Evolve

Businesses are far more concerned with security, data privacy and compliance than ever before, and rightfully so. The average cost of a data breach today is $3.9 million, according to IBM. As the ever-growing wave of security and privacy incidents continues, we’ve seen legislative reactions such as GDPR or the California Consumer Privacy Act (CCPA) emerge. A decade ago, CIOs would typically manage all aspects of data security and privacy based on the advice of a dedicated information security specialist. The sheer level of regulatory, financial and reputational damage at stake has shifted those responsibilities to chief information security officers (CISOs). Now prominent board advisers, CISOs are responsible for mitigating security and privacy risks, maintaining compliance, and preventing incidents from impacting the business. In the past, CIOs would typically be responsible for collecting, organizing and retroactively reporting on company data. Now, we view data as a business enabler that can highlight meaningful trends, provide predictive models and help maximize efficiency and profitability.


3 important trends in AI/ML you might be missing

Gone are the days when on-premises versus cloud was a hot topic of debate for enterprises. Today, even conservative organizations are talking cloud and open source. No wonder cloud platforms are revamping their offerings to include AI/ML services. With ML solutions becoming more demanding in nature, the number of CPUs and amount of RAM are no longer the only ways to speed up or scale. More algorithms are being optimized for specific hardware than ever before – be it GPUs, TPUs, or “Wafer Scale Engines.” This shift towards more specialized hardware to solve AI/ML problems will accelerate. Organizations will limit their use of CPUs to solving only the most basic problems. The risk of being obsolete will render generic compute infrastructure for ML/AI unviable. That’s reason enough for organizations to switch to cloud platforms. The increase in specialized chips and hardware will also lead to incremental algorithm improvements leveraging the hardware. While new hardware/chips may allow use of AI/ML solutions that were earlier considered slow/impossible, a lot of the open-source tooling that currently powers the generic hardware needs to be rewritten to benefit from the newer chips.


The PSD2 deadline: 8 things businesses need to know

For many commentators, the security implications of opening up account data is a top concern, but open banking poses many more challenges than this. A detailed inspection of the small print of both PSD2 and the FCA’s new guidelines for payment service providers shows that the legislation has repercussions far beyond security, that are not so well understood by many in the financial services sector. The complexities are so vast that compliance officers may even be scratching their heads in bewilderment. Here are a few things you need to be aware of. There are now two new classes of payment service providers under PSD2. In addition to standard banks and building societies, PSD2 recognises account information service providers (AISPs) and payment initiation service providers (PISPs). The latter offer services such as bill payment and peer-to-peer transfers, by initiating “a payment from the user account to the merchant account by creating a software bridge”. The former, meanwhile, provide aggregated bank account information and analysis services. PSD2 applies to non-EU transactions where one leg is carried out by a PSP outside Europe, in addition to those taking place on EU soil.


Is your board risk-ready?


As expectations and pressures evolve, corporate directors have been taking action: adding risk committees, ensuring there is a critical mass of risk expertise on the board, measuring how much attention they pay to risks, and understanding how cultural dynamics affect risk decisions. A recent Spencer Stuart study found that 12 percent of S&P 500 companies had risk committees in 2019 — a small number, but up from 9 percent in 2014. Finance and utility companies were by far the most likely to have risk committees, in no small part for regulatory reasons. But the vast majority — more than 95 percent — of S&P 500 companies assess the performance of their board of directors annually, as do 80 percent of companies in the Russell 3000, according to a 2019 report (pdf) by the Conference Board and data-mining firm ESGAUGE. There is evidence that boards are reacting to assessments. In PwC’s 2019 Annual Corporate Directors Survey, for example, an impressive 72 percent of directors said their boards made changes in response to the last board performance assessment — up from 49 percent just three years earlier.


Hackers are working harder to make phishing and malware look legitimate


The report found that hackers are no longer using fake invoices to trick businesspeople. Now they are pretending to be company employees asking partners to take action. In December, cybercriminals compromised the account of an employee at a Chinese venture capital firm. They spoofed the domain of an Israeli startup the Chinese firm had been working with and managed to steal $1 million in funding meant for the Israeli company. Trend Micro shared an example of a BEC email caught by the Cloud App Security platform. The email, supposedly from the CEO, included phrases like "No one else except us must be informed at this time," and "First, provide me immediately the available cashflow of our bank account," and "As soon as I receive those information, I will share with you further instructions." Bad actors also are using new credential phishing techniques, including malicious voice mails and shared files. One phishing campaign in July 2019 used fake OneNote Online pages hosted on a SharePoint subdomain that linked to a fake Microsoft login page.
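One crude way to see how such filters get started is a keyword heuristic built from the very phrases quoted above. Real products like the Cloud App Security platform rely on far richer signals (writing-style analysis, sender reputation, domain age); this stdlib-only sketch only illustrates the basic idea:

```python
# Naive BEC (business email compromise) heuristic: count how many
# known-suspicious phrases from reported scams appear in an email body.
# The phrase list comes from the quotes in the article above; a real
# detector would use ML models, not a hand-written list.

SUSPICIOUS_PHRASES = [
    "no one else except us must be informed",
    "provide me immediately",
    "i will share with you further instructions",
]

def bec_score(body: str) -> int:
    """Return the number of suspicious phrases found in the body."""
    lowered = body.lower()
    return sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

email = ("First, provide me immediately the available cashflow of our "
         "bank account. No one else except us must be informed at this time.")
score = bec_score(email)  # two of the three phrases match this body
```

A mail gateway might quarantine or flag messages whose score crosses a threshold; the hard part in practice is keeping false positives low, which is why vendors moved to statistical models.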


Linux Foundation open sources disaster-relief IoT firmware: Project OWL


Project OWL (Organization, Whereabouts, and Logistics) creates a mesh network of Internet of Things (IoT) devices called DuckLinks. These Wi-Fi-enabled devices can be deployed or activated in disaster areas to quickly re-establish connectivity and improve communication between first responders and civilians in need. In OWL, a central portal connects to solar- and battery-powered, water-resistant DuckLinks. These create a Local Area Network (LAN). In turn, these power up a Wi-Fi captive portal using low-frequency Long-range Radio (LoRa) for Internet connectivity. LoRa has a greater range, about 10 km, than cellular networks. LoRa also avoids the danger of having its bandwidth throttled by cellular carriers. That, by the way, actually happened in 2018 in Northern California's Mendocino Complex Fire when Verizon slowed the first responders' internet. DuckLinks then provide an emergency mesh network to all Wi-Fi-enabled devices in range. This can be used both by people needing help and first responders trying to get a grip on the situation with data analytics. Armed with this information, they can then formulate an action plan.


Has an AI Cyber Attack Happened Yet?


One of the biggest ways in which we can see AI-assisted cyber attacks affecting our daily lives is through Twitter. We’ve all heard one political party or another accusing the other of using "bots" to misrepresent arguments or make it seem like certain factions had more followers than they actually did. Bots by themselves aren’t a huge deal, and lots of companies and services use bots to drive customer engagement and funnel people through different areas of the website. We’ve all seen the bot-powered chat boxes on sites where you might have a question, like the homepage of a college. But the real issue with bots is that they are becoming more sophisticated. In an ironic twist on the Turing test, it’s becoming increasingly difficult for people to tell bots apart from real people, even though machines once almost universally failed the exam. Google’s recent demonstrations of increasingly convincing AI-generated audio and video illustrate this trend. These bots can pretty easily be used for misinformation, like when users marshal them to flood a Twitter thread with fake posters to influence an argument.



Quote for the day:


"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson


Daily Tech Digest - March 11, 2020

Open-source options offer increased SOC tool interoperability

"What we're trying to do as an industry, if we can align around a common data model and a common set of APIs, then that problem [a lack of interoperable security tools] becomes a much smaller problem than it is today," Chris Smith, senior sales engineer at McAfee, tells CSO. STIX-Shifter, contributed by IBM and built on STIX (Structured Threat Information eXpression), is useful "if you're threat hunting and you want to query all your other tools for evidence of a certain artifact use STIXShifter to ask that question in a vendor-neutral platform agnostic language," the GitHub repo said. "STIX Shifter would be the technology that enables a company to search for an indicator of compromise across multiple tools, data repositories," Jason Keirstead, chief architect, IBM Security Threat Management, tells CSO. "If that search turns up a compromised device, OpenDXL Ontology would be the mechanism that would be used to issue alerts/notifications across other tools in order to begin remediation."
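The "ask once, query every tool" pattern that STIX-Shifter implements can be sketched in a few lines. Note this is purely illustrative: the translator functions and query dialects below are hypothetical, and STIX-Shifter's real API (STIX pattern strings, per-connector modules) is considerably richer:

```python
# Shape of vendor-neutral query fan-out: one indicator-of-compromise
# query is translated into each connected tool's own dialect.
# Both "dialects" here are invented for illustration only.

NEUTRAL_QUERY = {"field": "file.hash.sha256", "value": "abc123"}

def to_siem_dialect(query):
    # hypothetical search-style SIEM syntax
    return f"search {query['field']} = '{query['value']}'"

def to_edr_dialect(query):
    # hypothetical key:value EDR syntax
    return f"{query['field']}:{query['value']}"

TRANSLATORS = {"siem": to_siem_dialect, "edr": to_edr_dialect}

def fan_out(query):
    """Ask every connected tool the same question in its own dialect."""
    return {name: translate(query) for name, translate in TRANSLATORS.items()}

queries = fan_out(NEUTRAL_QUERY)
```

The interoperability win described in the article is exactly this: analysts write the neutral query once, and per-vendor translators (maintained by the community) handle each tool's syntax.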



Enterprises roll out private 5G while standards, devices, coverage evolve

Outside of private deployments, 5G coverage remains an obstacle. All the major carriers, including AT&T, Verizon, Sprint, and T-Mobile, are promising 5G connectivity, but in practice it's limited to a few areas in the biggest cities. Consumers don't have 5G-capable phones yet, so the carriers' 5G promises are little more than marketing hype for the time being. Gartner, for example, places 5G at the "peak of inflated expectations" in its most recent hype cycle report and predicts that it will take two to five years before 5G reaches what the analyst firm calls the "plateau of productivity," when mainstream adoption starts to take off. Until that happens, many enterprises are circumventing the lack of coverage by deploying private 5G in factories, college campuses, hospitals, office buildings, or other contained environments – just as the VA Palo Alto hospital did. "We believe that enterprise deployments have the potential to be the most significant and leading set of use cases for 5G," says Dan Hays, principal and head of US corporate strategy practice at PricewaterhouseCoopers.


Details about new SMB wormable bug leak in Microsoft Patch Tuesday snafu

According to Fortinet, the bug was described as "a Buffer Overflow Vulnerability in Microsoft SMB Servers" and received a maximum severity rating. "The vulnerability is due to an error when the vulnerable software handles a maliciously crafted compressed data packet," Fortinet said. "A remote, unauthenticated attacker can exploit this to execute arbitrary code within the context of the application." A similar description was also posted -- and later removed -- in a Cisco Talos blog post. The company said that "the exploitation of this vulnerability opens systems up to a 'wormable' attack, which means it would be easy to move from victim to victim." ... However, there is currently no danger to organizations worldwide. Only details about the bug leaked online, not actual exploit code, as it did in 2017. Although today's leak alerted some bad actors about a major bug's presence in SMBv3, exploitation attempts aren't expected to start anytime soon. Furthermore, there are also other positives. For example, this new "wormable SMB bug" only impacts SMBv3, the latest version of the protocol, included only with recent versions of Windows.


Dump your passwords, improve your security -- really


Your first encounter with FIDO likely won't look much different than two-factor authentication. You'll first type a conventional password, then plug in or wirelessly connect a FIDO hardware security key. The process still uses passwords, but it's more secure than passwords alone or passwords bolstered by codes sent by SMS or retrieved from authenticators like Google Authenticator. This approach -- password plus security key -- is how you can use FIDO today on Google, Dropbox, Facebook, Twitter and Microsoft services like Outlook.com and eventually Windows. "Hardware security keys are very, very secure," said Diya Jolly, chief product officer of authentication service company Okta. That's why congressional campaigns, the Canadian government's computing services division and all Google employees use them. Consumer services today often require you to plug in the keys only when logging in for the first time on a new PC or phone, or when you're taking a particularly sensitive action like transferring money out of your bank account or changing your password. Of course, a security key can be a hassle if you don't have it readily available when you need it.
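The reason a security key resists phishing is its challenge-response design: the server sends a fresh random challenge and the key answers with a value cryptographically bound to it, so a captured response cannot be replayed. Real FIDO/WebAuthn uses public-key signatures and also binds the web origin; the sketch below substitutes HMAC purely because it is in the Python standard library, so treat it as a simplified illustration of the flow, not the actual protocol:

```python
# Simplified challenge-response flow, HMAC standing in for FIDO's
# public-key signatures. A stolen response is useless because the next
# login uses a different random challenge.
import hashlib
import hmac
import secrets

device_secret = secrets.token_bytes(32)   # provisioned at registration

def sign_challenge(secret: bytes, challenge: bytes) -> bytes:
    """What the 'security key' computes over the server's challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# Login: server issues a single-use random challenge, key signs it.
challenge = secrets.token_bytes(16)
response = sign_challenge(device_secret, challenge)

# Server verifies against its own copy of the secret.
ok = hmac.compare_digest(response, sign_challenge(device_secret, challenge))

# Replaying the old response against a new challenge fails.
replay_ok = hmac.compare_digest(
    response, sign_challenge(device_secret, secrets.token_bytes(16)))
```

This is also why codes sent by SMS are weaker: a phishing page can relay a typed code in real time, but it cannot answer a fresh challenge without the hardware key present.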


What is LLVM? The power behind Swift, Rust, Clang, and more

At its heart, LLVM is a library for programmatically creating machine-native code. A developer uses the API to generate instructions in a format called an intermediate representation, or IR. LLVM can then compile the IR into a standalone binary or perform a JIT (just-in-time) compilation on the code to run in the context of another program, such as an interpreter or runtime for the language. LLVM’s APIs provide primitives for developing many common structures and patterns found in programming languages. For example, almost every language has the concept of a function and of a global variable, and many have coroutines and C foreign-function interfaces. LLVM has functions and global variables as standard elements in its IR, and has metaphors for creating coroutines and interfacing with C libraries. Instead of spending time and energy reinventing those particular wheels, you can just use LLVM’s implementations and focus on the parts of your language that need the attention. ... LLVM’s architecture-neutral design makes it easier to support hardware of all kinds, present and future. For instance, IBM recently contributed code to support its z/OS, Linux on Power, and AIX architectures for LLVM’s C, C++, and Fortran projects.
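To make the idea of intermediate representation concrete, here is what the textual form of LLVM IR looks like for a function that adds two 32-bit integers. Real front ends (and Python bindings such as llvmlite) build IR through an API rather than string concatenation; this stdlib-only sketch just emits the text so you can see the typed, SSA-style format LLVM compiles:

```python
# Emit textual LLVM IR for: i32 add(i32 a, i32 b) { return a + b; }
# A real front end would use LLVM's builder APIs instead of strings.

def emit_add_function(name: str) -> str:
    return "\n".join([
        f"define i32 @{name}(i32 %a, i32 %b) {{",
        "entry:",
        "  %sum = add i32 %a, %b",   # every value is typed; %sum assigned once (SSA)
        "  ret i32 %sum",
        "}",
    ])

ir = emit_add_function("add")
```

Feeding this text to LLVM tools (for example `llc` to produce native assembly) is exactly the hand-off the article describes: the language front end stops at IR, and LLVM's optimizers and back ends do the rest.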


Accelerating ML Inference on Raspberry Pi With PyArmNN

Arm NN is an inference engine for CPUs, GPUs, and NPUs. It executes ML models on-device in order to make predictions based on input data. Arm NN enables efficient translation of existing neural network frameworks, such as TensorFlow Lite, TensorFlow, ONNX, and Caffe, allowing them to run efficiently and without modification across Arm Cortex-A CPUs, Arm Mali GPUs, and Arm Ethos NPUs. PyArmNN is a newly developed Python extension for Arm NN SDK. In this tutorial, we are going to use PyArmNN APIs to run a fire detection image classification model fire_detection.tflite and compare the inference performance with TensorFlow Lite on a Raspberry Pi.  Arm NN provides TFLite parser armnnTfLiteParser, which is a library for loading neural networks defined by TensorFlow Lite FlatBuffers files into the Arm NN runtime. We are going to use the TFLite parser to parse our fire detection model for “Fire” vs. “Non-Fire” image classification.
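Since the tutorial's goal is comparing PyArmNN against TensorFlow Lite on inference time, a small timing harness is the reusable piece. Neither runtime is assumed installed here, so the two backends below are stand-in functions; on a Raspberry Pi you would swap in the real PyArmNN and TFLite interpreter calls for the `fire_detection.tflite` model:

```python
# Generic per-inference timing harness. The two "backends" are stubs;
# replace them with real PyArmNN / TensorFlow Lite invoke calls.
import time

def benchmark(infer, inputs, warmup=3):
    """Return average seconds per inference, excluding warm-up runs."""
    for x in inputs[:warmup]:          # warm-up: caches, lazy allocation
        infer(x)
    start = time.perf_counter()
    for x in inputs:
        infer(x)
    return (time.perf_counter() - start) / len(inputs)

def fake_backend_a(x):                 # stand-in for PyArmNN inference
    return sum(x)

def fake_backend_b(x):                 # stand-in for TFLite inference
    return max(x)

data = [[0.1, 0.9]] * 100              # stand-in preprocessed image batch
avg_a = benchmark(fake_backend_a, data)
avg_b = benchmark(fake_backend_b, data)
```

Averaging over many runs and discarding warm-up iterations matters on the Pi, where first-run allocation and cache effects can dwarf steady-state inference time.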


Instant Low Code Database Web App - ASP.NET Core 3.1 Single Page Application(SPA)


A single-page application (SPA) is defined as a web application that fits on a single web page with the goal of providing a more pleasant user experience similar to a desktop application. It can be used to create a fully blown business web application linked to a database or quickly create a web application that can traverse, search & report on a large database. The following sample application code is an alternative to using libraries such as AngularJS, React, Vue, etc. Only jQuery and bootstrap are used in conjunction with vanilla JavaScript, HTML and CSS. A very simple approach is used in overlaying div tags and Ajax calls, to read and update the database, without any Postback. The Grid and Detail forms included in this application also contain simple CSS, to make them automatically resize to any mobile device, down to iPhone, etc. Using horizontal and vertical scrolling or swiping allows the user to quickly read all data columns and rows in a Grid. Can redo Parent, Child and Grandchild CRUD grids, over and over, within seconds.


What's the difference between RPA and IPA?


IPA development and implementations are significantly more complex. The technology requires data extraction and classification, machine learning and AI to foster decision-making. Businesses using IPA will need experts on hand who have an in-depth understanding of an ever-growing set of tools and capabilities in the space. Agarwal said technical skill requirements for users are key distinctions IT executives should be aware of upfront. The technical skill required for RPA ranges from basic to mature, whereas the technical skill required for IPA ranges from mature to advanced. RPA, not surprisingly, has considerably more traction as a result of this ease of use. "There are more processes being automated with RPA than IPA," he said. Process efficiencies associated with RPA, however, are not as high as the potential efficiencies realized by IPA. Agarwal said in RPA deployments, humans continue to play a significant role in data extraction and decision-making alongside the rules-based processing handled by RPA tools. IPA, in contrast, promises greater value in reducing manual labor costs, because it automates much of the human decision-making.
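The division of labour described here can be sketched in a few lines: RPA executes a fixed rule and escalates everything else to a human, while IPA substitutes a learned model for that human decision. All names below are hypothetical:

```python
def rpa_route_invoice(invoice):
    # RPA-style: a deterministic, rules-based step; anything outside
    # the rule goes to a human, who makes the actual decision.
    if invoice["amount"] < 1000 and invoice["vendor_known"]:
        return "auto-approve"
    return "route-to-human"

def ipa_route_invoice(invoice, model):
    # IPA-style: a trained model makes the decision the human used to make.
    return "auto-approve" if model(invoice) > 0.9 else "route-to-human"
```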


Enterprises being won over by speed, effectiveness of network automation

It's a burgeoning field: MarketsandMarkets Research reports that the global network automation market is on track to grow from $2.3 billion in 2017 to an estimated $16.9 billion by 2022. "It’s a really exciting topic in the networking industry right now because the scale and complexity of networks is really greater than it ever was before," says Brandon Butler, senior research analyst covering enterprise networks at IDC, a Framingham, Mass.-based industry analyst firm. "It's a revolution we're still in the early days of. There are more mobile workers out there, accessing high-bandwidth company apps from more diverse places. By 2025, there are going to be 41.6 billion connected IoT devices that enterprises are getting data and insights from. If your network is down, it touches everything in the company. Relying on manual, ad-hoc management isn't efficient, scalable or secure." And while it's an exciting market, it really is in its infancy, according to Andre Kindness, principal analyst at Forrester, a Cambridge, Mass.-based research firm. He notes that enterprises might be automating firewall configurations or the monitoring of their switches and traffic.
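Replacing the manual, ad-hoc work Butler describes often starts with something as modest as rendering device configuration from structured data instead of hand-editing it. A vendor-neutral toy sketch (all names hypothetical):

```python
def render_switch_config(hostname, vlans):
    """Render a minimal switch config from structured data."""
    lines = [f"hostname {hostname}"]
    for vlan_id, name in sorted(vlans.items()):
        lines.append(f"vlan {vlan_id}")
        lines.append(f" name {name}")
    return "\n".join(lines)
```

The same data could then drive hundreds of devices consistently, which is the scalability argument in a nutshell.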


UK government survives rebellion on ‘high-risk’ comms tech supplier strategy

Though relieved, the UK’s comms industry warned that it would still take a huge hit from the decision. In January 2020, EE network owner BT warned abiding by the UK government’s decision to restrict access to kit from suppliers such as Huawei could have a potential impact of around £500m, while in February 2020 Vodafone calculated that removing Huawei equipment that exists already in its core networks across Europe would cost as much as €200m over the next five years. Such recommendations were never accepted by a core group of backbench MPs among the UK’s ruling Conservative Party, and former leader Iain Duncan Smith led a rebellion against the Telecommunications Infrastructure Bill, proposing an amendment that would lead to an outright ban on Huawei technology, which he said posed a real and direct threat to the UK’s national security. Duncan Smith’s amendment would have seen firms classified as high-risk by the National Cyber Security Centre banned entirely from the UK’s 5G project by 31 December 2022.



Quote for the day:


"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson


Daily Tech Digest - March 10, 2020

How can companies thrive under CCPA regulation?

One major challenge that Manley says companies deal with when it comes to data management under CCPA is dealing with consumer data that’s located across a number of devices and software infrastructures. He explained: “I think the biggest challenge we see a lot of people having is that they often don’t understand how many places are holding customer data. “They were thinking very much about the data that they had on their premises, maybe things that are in file servers, databases, corporate laptops, that sort of thing, and it takes a while to then realise that they’ve got a number of SaaS applications, whether it’s Salesforce, Slack or Office 365. ... According to Manley, the companies that manage to succeed while staying within CCPA boundaries use the regulation as an opportunity to reflect on their operations. “Regulations like CCPA are a good baseline for what your company should be doing anyway,” he said. “For a lot of the better organisations, we see them saying that the goal isn’t just to hit the baseline, but it’s to use this as a starting point for discussion about what we want to be as a business.”



Impactful, but Overhyped AI

Many companies struggle with how to successfully integrate AI into their businesses. Lux Research released a report called “Artificial Intelligence: A Framework to Identify Challenges and Guide Successful Outcomes” that analyzes the market, outlines several challenges companies face in integrating AI, and homes in on several factors businesses should consider before investing in AI. The four factors the research firm suggests to help businesses make wise AI investments and decisions include: clearly understanding the outcomes implementing AI will provide for their businesses; focusing on an AI product’s capabilities instead of flashy marketing; knowing when the technology is mature enough to mitigate risk; and identifying practical challenges to both implementation and maintenance of the technology once it is in place. There’s no doubt that AI technologies can be impactful in helping companies achieve digital transformation, but there is also a lot of hype that is not necessarily helping the space and the players within it.


Multiple nation-state groups are hacking Microsoft Exchange servers

These state-sponsored hacking groups are exploiting a vulnerability in Microsoft Exchange email servers that Microsoft patched last month, in the February 2020 Patch Tuesday. The vulnerability is tracked under the identifier of CVE-2020-0688. ... This Exchange vulnerability is not, however, straightforward to exploit. Security experts don't see this bug being abused by script kiddies (a term used to describe low-level, unskilled hackers). To exploit the CVE-2020-0688 Exchange bug, hackers need the credentials for an email account on the Exchange server -- something that script kiddies don't usually have. The CVE-2020-0688 security flaw is a so-called post-authentication bug. Hackers first need to log in and then run the malicious payload that hijacks the victim's email server. But while this limitation will keep script kiddies away, it will not stop APTs and ransomware gangs, experts said. APTs and ransomware gangs often spend most of their time launching phishing campaigns, following which they obtain email credentials for a company's employees.


3 cloud architecture problems that need solutions


Many push as much as they can to the edge, but realize that you’re moving away from a centralized system (the public cloud), to many decentralized systems (the edge devices or servers). You need to understand that you must maintain these edge systems, and they are much more difficult to monitor, govern, secure, update, and configure. Multiply that effort by hundreds of edge computing devices and you've got an operational nightmare. Second, what to containerize? Many enterprises say containers are their strategy and not just an enabling technology. This almost religious belief in the power of containers has pushed many an application to the cloud in containers, but that’s really not how business should be moving there. The issue is that there are no hard and fast rules as to what can—and should—exist in a container. Legacy applications that will take a great deal of effort to refactor (rewrite) for containers are not likely candidates; however, in many instances, the cloud migration team attempts to move them first. This means that enterprises will fail to find value in containers for some of their applications that move to the cloud.


How to break down data silos: 4 obstacles and solutions

With the growth of shadow IT, vendor software and databases can come through virtually any departmental door. Systems from different vendors that departments independently buy don't necessarily interact well with each other. When this occurs, systemic data silos can arise because of cross-system and data integration failures. The best way to address this issue is to require interoperability and a full set of application programming interfaces (APIs) in the requests for proposal (RFP) that IT and individual business departments issue to vendors. One way to assure that system and data interoperability is a front-page requirement on RFPs is for IT to create a standard RFP that is required by purchasing or whichever department authorizes tech purchases. This standardized form can be used by IT and end-user departments. Most systems and databases sold by vendors have some type of APIs for data integration; however, totally seamless integration and the ability to easily aggregate data from disparate systems can never be assumed.
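Once systems do expose APIs, breaking a silo is often a matter of joining records from each system on a shared key. A minimal aggregation sketch (field names hypothetical):

```python
def merge_records(*systems, key="email"):
    """Aggregate records from disparate systems, joined on a shared key."""
    merged = {}
    for records in systems:
        for rec in records:
            merged.setdefault(rec[key], {}).update(rec)
    return list(merged.values())
```

Real integrations also have to reconcile conflicting values and schemas, which is why the article warns that seamless aggregation can never be assumed.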


How CIOs can limit the business disruption of the coronavirus — Gartner

With various quarantine measures and travel restrictions undertaken by organisations, cities and countries, uncertainties and disruptions are beginning to have more of an impact on businesses and their workforces. This increases the chance that business operations are either being suspended or run in a limited capacity. In response to this, Gartner is promoting the use of AI to automate some tasks, particularly basic customer service protocols and candidate screenings. In its report, Gartner also recommends that in organisations where remote working capabilities have not yet been established, CIOs need to work out interim solutions, including using instant messaging for general communication, file sharing/meeting solutions, and access to enterprise applications such as enterprise resource planning (ERP) and customer relationship management (CRM). ... If it isn’t possible for organisations to meet their clients face to face, Gartner recommends using digital channels such as video calls and live streaming solutions that can serve various customer engagement and selling scenarios.


What Does A Typical Day Of A Data Scientist Look Like?

Like every other professional, the day of a data scientist will be dotted with emails to answer and meetings to attend. But this is where the similarities end. Unlike in most jobs, each day throws up new challenges and unique problems for a data scientist. This comes in the form of varied projects, and that in turn changes with the industry they operate in. But despite the flux, what ties together each workday for them collectively are data-related tasks. Depending on your profile, you will, broadly speaking, either be pulling data, shaping it, merging it, or analysing it — all with the end goal of solving problems for businesses. This is accomplished by using a wide variety of tools that look for patterns or trends within a given data set, and trying to simplify data problems. ... As emphasised in the first point, the primary task of data scientists is to be problem-solvers, and that cannot be achieved in silos. A typical day would involve engaging with stakeholders at multiple levels to determine the questions that need pointed answers. Not just that, it is their job to come up with different approaches to solve these problems.
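The pull-shape-analyse loop can be sketched with nothing but the standard library; a toy summary step over already-pulled rows (column names hypothetical):

```python
import statistics

def summarize(rows, field):
    """Shape raw rows into a numeric column, then analyse it."""
    values = [float(r[field]) for r in rows if r.get(field) not in (None, "")]
    return {"count": len(values),
            "mean": statistics.mean(values),
            "max": max(values)}
```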


Introducing Alpine.js: A Tiny JavaScript Framework

Ever built a website and reached for jQuery, Bootstrap, Vue.js or React to achieve some basic user interaction? Alpine.js is a fraction of the size of these frameworks because it involves no build steps and provides all of the tools you need to build a basic user interface. Like most developers, I have a bad tendency to over-complicate my workflow, especially if there’s some new hotness on the horizon. Why use CSS when you can use CSS-in-JS? Why use Grunt when you can use Gulp? Why use Gulp when you can use Webpack? Why use a traditional CMS when you can go headless? Every so often though, the new-hotness makes life simpler. Recently, the rise of utility-based tools like Tailwind CSS have done this for CSS, and now Alpine.js promises something similar for JavaScript. In this article, we’re going to take a closer look at Alpine.js and how it can replace jQuery or larger JavaScript libraries to build interactive websites. If you regularly build sites that require a sprinkling of JavaScript to alter the UI based on some user interaction, then this article is for you.


Job Trends For Data Scientists In The Next 5 Years

A trend that has emerged in recent times is that companies which earlier identified themselves as ‘non-tech’, are beginning to position themselves as tech companies, and this is likely to continue. A case in point is banks. For instance, the term ‘analyst’ used in the context of this industry, might now be called a ‘data scientist’, as long as they are seeking to monetise the company’s data assets. One of the main drivers for this trend is the copious amounts of data available today – and this has been increasing exponentially. What is more, fuelled by the rise of the Internet of Things (IoT) and social media, this growth is not expected to slow down anytime soon. The IoT market in India alone is reportedly likely to reach a whopping 2 billion connections by 2022. This is buttressed by the fact that not only are more devices coming online but with greater improvements in hardware, the type of data delivered will be more diverse. The same goes for social media. According to Hootsuite, the number of social media users worldwide in 2019 rose to 3.484 billion — recording an increase of 9% y-o-y.


Huawei P40 Pro expected to have 7 cameras, 10x optical zoom, and 5G support


According to known Apple leaker Ming-Chi Kuo, a 10x optical zoom camera could be included as one of the sensors in the P40 Pro's camera system, making it the world's first phone to achieve such a feat. The Mate 30 Pro featured a quad-camera set-up, and included a 50x digital zoom and a 5x optical zoom, which catapulted it into the mobile hall of fame.  Optical zoom is achieved by switching from a wide-angle camera to a telephoto camera. The magnification number is a reflection of the difference of those two lens lengths. Using the telephoto camera without "pinching in" results in a higher-quality image than using digital zoom, which is what happens when you pinch the screen of your phone while using the main camera, or when you try to zoom in beyond the telephoto camera's capabilities. According to GizChina, the P40 Pro's rear camera will come with a 52-megapixel Sony IMX700 sensor, which is 10 megapixels higher than the P30 Pro's rear camera. The 52-megapixel sensor is significantly lower in terms of resolution than Samsung's Galaxy S20 Ultra 108-megapixel sensor, but reports suggest this new sensor can bring bigger pixels and better low-light image quality.
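Since the magnification number is the ratio of the two focal lengths, the arithmetic is simple; a sketch (the example focal lengths are illustrative, not the P40 Pro's actual lenses):

```python
def optical_zoom(wide_focal_mm, tele_focal_mm):
    """Optical zoom factor when switching from the wide to the telephoto camera."""
    return tele_focal_mm / wide_focal_mm

# e.g. a 16mm-equivalent wide lens paired with a 160mm-equivalent
# periscope telephoto would give a 10x optical zoom.
```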



Quote for the day:


"The captain of a ship can run a great ship, but he can't do anything about the tides." -- Matthew Norman


Daily Tech Digest - March 09, 2020

Can Continuous Intelligence and AI Predict the Spread of Contagious Diseases?


Did past efforts to model the spread of contagious diseases make false assumptions about the data they relied on? Does the fact that many people in one geographic region search for the name of an emerging contagious disease mean the disease is present and growing? Perhaps, perhaps not. The danger is relying on coincidences and not linking cause to effect. Did past and current efforts have all the data they needed? One issue with forecasting the spread of a disease is that models might not have accurate data. The issue is especially relevant at the onset of new diseases. It is quite easy to confuse flu-like symptoms in patients. Doctors may not know the symptoms of a disease at its onset, or they may make inaccurate diagnoses. Are the models based on the right science? At the early stage of investigating a newly found disease, even basic information, like how a disease spreads, is unknown. Is it airborne? Does it spread via exposure to blood or other bodily fluids? What’s the incubation period? Such mechanisms need to be nailed down before predictions can be made.
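Most epidemic forecasts that do nail those mechanisms down are compartmental models. The classic SIR model, for instance, moves people between susceptible, infected, and recovered pools using exactly the parameters that are unknown early on: a transmission rate and a recovery rate. A minimal sketch (not from the article):

```python
def sir_step(s, i, r, beta, gamma):
    """One time step of the SIR model, with s, i, r as population fractions.

    beta: transmission rate (depends on how the disease spreads)
    gamma: recovery rate (depends on how long infection lasts)
    """
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)
```

Get beta or gamma wrong at the outset, as the article warns is likely, and every downstream prediction inherits the error.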



Out at Sea, With No Way to Navigate: Admiral James Stavridis Talks Cybersecurity

We're still figuring out how this is going to work. To shift metaphors to the oceans, it's as though we're out at sea, we're in a bunch of boats, but we haven't really put in place buoys and navigational aids, and we haven't really defined who's going to protect us. So if I'm a commercial ship at sea, I know the US Navy is going to come and defend me if I'm an American ship and I'm under attack. And in fact, we actively discourage merchant ships from mounting their own defenses. The defense requirements, I think, ought to be vested in the state. But in the world of cyber, realistically, if you're a commercial entity, particularly a target-rich kind of environment like financials or critical infrastructure, say electric grid, the government so far has not really stepped up to that task of broadly protecting you. Yeah, you can get some help from the NSA and some help from the FBI and some help from the CIA. But broadly speaking, you are going to have to have some mechanisms, at least on the detection and on the defensive side.


Containers march into the mainstream

In 2013, Solomon Hykes’ invention of Docker containers had an analogous effect: With a dab of packaging, any Linux app could plug into any Docker container on any Linux OS, no fussy installation required. Better yet, multiple containerized apps could plug into a single instance of the OS, with each app safely isolated from the other, talking only to the OS through the Docker API. That shared model yielded a much lighter-weight stack than the VM (virtual machine), the conventional vehicle for deploying and scaling applications in cloudlike fashion across physical computers. So lightweight and portable, in fact, that developers could work on multiple containerized apps on a laptop and upload them to the platform of their choice for testing and deployment. Plus, containerized apps start in the blink of an eye, as opposed to VMs, which typically take the better part of a minute to boot. To grasp the real impact of containers, though, you need to understand the microservices model of application architecture. Many applications benefit from being broken down into small, single-purpose services that communicate with each other through APIs, so that each microservice can be updated or scaled independently.


Democratizing data, thinking backwards and setting North Star goals

Essentially, databases are a fairly old technology, but it has always been about three things. One thing is value. How do you get the best out of your data, which is, what are the features that you provide, the power of querying the data, of updating it, of correlating it, and doing things with the data? The second thing has been security. How do you make sure that the data stays under your control, that you own it and determine what happens with the data? And the third is, I would call it cost or performance, is making sure that you don’t overpay for the data, right? That it’s kind of cheap to, or kind of gets more and more affordable, to do what you want to do with your data and control it. ... The best way to process data is if it’s really structured and you know exactly what it is, right? And you have a schema, essentially. And I spent a lot of time working on semi-structured data, which has some structure that you kind of extract and that is kind of like getting good value out of all data, not just your structured data like your bank accounts, but also your email, the books you write, the word documents you write, getting some value out of that.
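Extracting that structure from semi-structured data can start as simply as inferring a schema from a JSON document; a toy sketch:

```python
import json

def infer_schema(document):
    """Map each top-level field of a JSON document to its value's type name."""
    return {key: type(value).__name__
            for key, value in json.loads(document).items()}
```

Real systems go much further (nested fields, conflicting types across documents), but the idea is the same: recover enough schema to query the data as if it were structured.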


Artificial intelligence and machine learning an essential part of cybersecurity


World Wide Technology also plans to use AI and ML this year as part of its cybersecurity plans, according to chief technology advisor Rick Pina. "In today's digital age, the security of data, applications, and processes is of the utmost importance; and AI and ML now play an integral part in this cybersecurity process. AI and ML have brought enticing new prospects for speed, accuracy, and connectivity to the public and private sectors, allowing government agencies and corporate organizations to make great strides in governed self-service access, alongside data security and reliability," Pina said. ... Michael Hanken, vice president of IT at Multiquip, said he isn't planning to use AI and ML yet, but he is researching its benefits and limits to see how it might work in conjunction with cybersecurity in the future. Dan Gallivan, director of IT for Payette, said, "AI and ML are not part of the official plan this year but I do feel they are in the not too distant future as we learn more about artificial intelligence and machine learning development capabilities and then experiment with them in cybersecurity."


7 Cloud Attack Techniques You Should Worry About

As organizations transition to cloud environments, so too do the cybercriminals targeting them. Learning the latest attack techniques can help businesses better prepare for future threats. "Any time you see technological change, I think you certainly see attackers flood to either attack that technological change or ride the wave of change," said Anthony Bettini, CTO of WhiteHat Security, in a panel at last week's RSA Conference. It can be overwhelming for security teams when organizations rush headfirst into the cloud without consulting them, putting data and processes at risk. Attackers are always looking for new ways to leverage the cloud. Consider the recently discovered "Cloud Snooper" attack, which uses a rootkit to bring malicious traffic through a victim's Amazon Web Services environment and on-prem firewalls before dropping a remote access Trojan onto cloud-based servers. As these continue to pop up, many criminals rely on tried-and-true methods, like brute-forcing credentials or accessing data stored in a misconfigured S3 bucket. There's a lot to keep up with, security pros say.
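Defending against the tried-and-true methods is often equally unglamorous. A toy detector that flags likely brute-force credential attempts from repeated failed logins (the threshold and event format are hypothetical):

```python
from collections import Counter

def flag_brute_force(login_events, threshold=10):
    """Return source IPs with a suspicious number of failed logins.

    login_events: iterable of (source_ip, succeeded) pairs.
    """
    failures = Counter(ip for ip, succeeded in login_events if not succeeded)
    return sorted(ip for ip, count in failures.items() if count >= threshold)
```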


Robotic Process Automation Implementation Choices


The first step in implementing RPA is identifying tasks that lend themselves to automation. There are some common characteristics to look for even though RPA application areas cut across broad swaths of organizations. Specifically, IBM notes that an “RPA-ready” task is one that is simple, consistent, and repeatable; repetitive and low-skill, creating human issues such as high error rates and low worker morale; part of an existing or planned process where stripping off routine tasks can free humans and deliver significant productivity, efficiency, or cost benefits; and able to offer meaningful opportunities to improve customer and worker experiences by speeding up existing processes. Some tasks may meet many of these criteria but still not be suitable for RPA. For example, a task may meet every criterion, but if the task requires additional data capture capabilities or a redesign of the process, RPA may not be the right fit. RPA can be applied to a very broad range of tasks across most industries.
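The criteria read naturally as a checklist; a sketch of an evaluator over them, including the redesign caveat (all field names are hypothetical):

```python
RPA_CRITERIA = ("simple_and_repeatable", "repetitive_low_skill",
                "frees_up_humans", "improves_experience")

def rpa_ready(task):
    """A task is RPA-ready if it meets all criteria and needs no process redesign."""
    if task.get("needs_redesign") or task.get("needs_new_data_capture"):
        return False
    return all(task.get(criterion, False) for criterion in RPA_CRITERIA)
```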


Android security warning: One billion devices no longer getting updates


All of the phones in the tests were infected successfully by Joker – also known as Bread – malware. Every single device tested was also infected with Bluefrag, a critical vulnerability that focuses on the Bluetooth component of Android. Which? said there should be greater transparency around how long updates for smart devices will be provided so that consumers can make informed buying decisions, and that customers should get better information about their options once security updates are no longer available. The watchdog also said that smartphone makers have questions to answer about the environmental impact of phones that can only be supported for three years or less. Google told ZDNet: "We're dedicated to improving security for Android devices every day. We provide security updates with bug fixes and other protections every month, and continually work with hardware and carrier partners to ensure that Android users have a fast, safe experience with their devices." When operating systems and security updates are delivered varies depending on the device, manufacturer and mobile operator. Because smartphone makers will tweak bits of the Android operating system, they often deploy patches and updates at a slower pace than Google does on its own devices, or not at all.


The Dark Side of Microservices

From a technical perspective, microservices are strictly more difficult than monoliths. However, from a human perspective, microservices can have an impact on the efficiency of a large organization. They allow different teams within a large company to deploy software independently. This means that teams can move quickly without waiting for the lowest common denominator to get their code QA’d and ready for release. It also means that there’s less coordination overhead between engineers/teams/divisions within a large software engineering organization. While microservices can make sense, the key point here is that they aren’t magic. Like nearly everything in computer science, there are tradeoffs — in this case, between technical complexity for organizational efficiency. A reasonable choice, but you better be sure you need that organizational efficiency, for the technical challenges to be worth it. Yes, of course, most clocks on earth aren’t moving anywhere near the speed of light. Furthermore, several modern distributed systems rely on this fact by using extremely accurate atomic clocks to sidestep the consensus issue.


Essential things to know about container networking

Choosing the right approach to container networking depends largely on application needs, deployment type, use of orchestrators and underlying OS type. "Most popular container technology today is based on Docker and Kubernetes, which have pluggable networking subsystems using drivers," explains John Morello, vice president of product management, container and serverless security at cybersecurity technology provider Palo Alto Networks. "Based on your networking and deployment type, you would choose the most applicable driver for your environment to handle container-to-container or container-to-host communications." "The network solution must be able to meet the needs of the enterprise, scaling to potentially large numbers of containers, as well as managing ephemeral containers," Letourneau explains. The process of defining initial requirements, determining the options that meet those requirements, and then implementing the solution can be as important as choosing the right orchestration agent to provision and load balance the containers. "In today's world, going with a Kubernetes-based orchestrator is a pretty safe decision," Letourneau says.
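Under Docker's pluggable driver model, the deployment type largely determines the driver. A toy lookup reflecting the common built-in drivers (the mapping is a simplification for illustration):

```python
DRIVER_FOR_DEPLOYMENT = {
    "single-host": "bridge",      # default container-to-container on one host
    "multi-host": "overlay",      # spans hosts, common under orchestrators
    "host-performance": "host",   # share the host's network namespace
    "l2-underlay": "macvlan",     # containers appear directly on the LAN
}

def pick_driver(deployment_type):
    """Suggest a Docker network driver for a deployment type."""
    return DRIVER_FOR_DEPLOYMENT.get(deployment_type, "bridge")
```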



Quote for the day:


"Leadership without mutual trust is a contradiction in terms." -- Warren Bennis