Daily Tech Digest - March 16, 2020

How Machine Learning, A.I. Might Change Education


One area in which A.I. intersects with student learning is in ethics. Some studies are already exploring the ethical issues of replacing teachers with bots. However, although bots can enhance education, they can’t replace teachers, according to Bernhardt L. Trout, professor of chemical engineering and director of Society, Engineering, and Ethics at the Massachusetts Institute of Technology. Trout argues that A.I. can enrich the learning of students as they master skills, languages and basic math, but it can’t help students learn creativity or critical thinking. “Bots will not be able to decide for us what is good, although they might be able to help us learn better the issues around the decision of what is good,” he said. “Bots are limited in making certain choices about education in ways that human beings are not limited, so this is where we get into the more ethical issues.” Trout sees bots teaching themes or the usage of certain words, for example, but they may be limited in helping students critique literature. He believes a bot is unable to teach the essential concepts needed to understand the work of philosophers such as Plato or Dante, or painters such as Michelangelo: “That’s where I think there is an intrinsic limitation.”



Rethinking change control in software engineering


When organizations mandate that their ops teams focus solely on stability, change control can quickly become change prevention, much to the chagrin of development teams that are mandated to continuously update and deliver new features. With DevOps now inverting the traditional IT delivery model, the question becomes: Can change control still work in the way it was intended? It's likely that small, software-focused organizations running in the cloud won't use the term change control. They may just execute deployments when it makes sense, especially if the team doesn't yet charge for their services, or they have a way to turn a new service on for only a limited number of users. On the other end, large organizations that still run COBOL tend to use monolithic ticketing systems to manage permissions and change approvals. However, most teams probably find themselves somewhere in the middle of these two extremes, leaving them in a place where they need to find a realistic balance between resiliency and flexibility in feature deployments.


What is the internet backbone and how does it work

Like any other network, the internet consists of access links that move traffic to high-bandwidth routers that move traffic from its source over the best available path toward its destination. This core is made up of individual high-speed fiber-optic networks that peer with each other to create the internet backbone. The individual core networks are privately owned by Tier 1 internet service providers (ISPs), giant carriers whose networks are tied together. These providers include AT&T, CenturyLink, Cogent Communications, Deutsche Telekom, Global Telecom and Technology (GTT), NTT Communications, Sprint, Tata Communications, Telecom Italia Sparkle, Telia Carrier, and Verizon. By joining these long-haul networks together, Tier 1 ISPs create a single worldwide network that gives all of them access to the entire internet routing table so they can efficiently deliver traffic to its destination through a hierarchy of progressively more local ISPs. In addition to being physically connected, these backbone providers are held together by a shared network protocol suite, TCP/IP. It actually comprises two protocols, the Transmission Control Protocol and the Internet Protocol, which set up connections between computers, ensuring that the connections are reliable and formatting messages into packets.
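The division of labor described above — IP routing packets hop by hop while TCP guarantees a reliable, ordered byte stream on top — can be sketched with a minimal loopback client/server pair. This is an illustrative sketch, not backbone code: the port and message are made up, and the loopback interface stands in for the network.

```python
import socket
import threading

# TCP (reliable, connection-oriented) rides on top of IP, which routes
# packets across networks. This loopback pair sketches what TCP gives
# applications: the bytes arrive intact and in order, however IP
# fragments and routes them underneath.

def run_server(server_sock, results):
    conn, _ = server_sock.accept()
    data = b""
    while True:
        chunk = conn.recv(1024)   # read until the client closes
        if not chunk:
            break
        data += chunk
    results.append(data)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

results = []
t = threading.Thread(target=run_server, args=(server, results))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"message split into " + b"packets " * 3)  # TCP segments this for us
client.close()
t.join()
server.close()

print(results[0])  # the full byte stream, reassembled in order
```

The application never sees individual packets; segmentation, retransmission, and reordering all happen inside the TCP/IP stack.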


Working from home: Your common challenges and how to tackle them


Interruptions come from outside, like a knock at the door from a delivery driver asking you to take in a parcel for a neighbour. Other potential interruptions include family, pets and friends who fail to understand that even though you are at home, you are still working. Closed doors, do-not-disturb signs and noise-cancelling headphones all come in handy. Distractions are slightly different. These are mostly the result of being in a different environment from the one you are used to, which means habits are disrupted and priorities get muddled. In the office your priorities are (mostly) well defined – you're there to work. At home your priorities are different: having fun, cooking, eating, cleaning, watching TV – almost by definition everything not work related. Bringing work into the home, especially if it's for the first time, and especially now, confuses all of this. It also makes you think you can combine the two, which is why you'll try to wash the dishes while on a conference call (and yes, everyone will know). Here the solution is to build a new work routine so that focusing is easier. That's why every set of remote working tips talks about getting up, getting dressed and attempting to work regular hours.


Telehealth and Coronavirus: Privacy, Security Concerns
Keith Fricke, principal consultant at tw-Security, notes that it's critical for healthcare entities to take a number of security measures when using telemedicine applications. That includes ensuring the transmission of information over the internet is encrypted and making sure that the endpoints where telehealth transmissions begin and end are secured, he says. "I don't think these risks are heightened by the coronavirus," he says. "However, a rush to establish new telehealth applications or a rush to expand existing ones to meet demands driven by COVID-19 can lead to overlooking important controls necessary to maintain security and privacy of information. As with any technology deployment involving the storage, processing or transmission of PHI or other confidential information, it is important to implement telehealth services with the appropriate technical, physical and administrative controls." As the use of telemedicine expands in dealing with the outbreak, new risks will also evolve, Fricke adds.


When Will 100% Remote Be an Accepted Norm?

Picture yourself graduating from college in the 1980s or 1990s, ready to change the world with your college degree and your freshly polished programming skills. Depending on the year you started working in the industry, you might have had to share a terminal to write the program code required to complete your job. The idea of having a computer at your desk wasn't a reality. For those starting a little later, you might have had a computer at your desk, but it was merely a client to a host system housing your program logic and processing power. The system you programmed on was in close proximity to you. Later, that idea was broken into an application server and some type of data store or database. There wasn't a cloud option to host your system, but some did have custom connectivity to align data centers across private corporate networks. Remember, the internet wasn't a "thing" we could rely on yet. There was a fleet of programmers who grew up in this reality.


How SIT and UAT differ
With user acceptance testing, the development organization gives the build or release to the target consumer of the system, late in the software development lifecycle. Either the end users or that organization's product development team perform expected activities to confirm or accept the system as viable. The UAT phase is typically one of the final steps in an overall software development project. ... Testers who evaluate functionality as it's delivered are usually prepared to also check application functionality as a whole, integrated solution. SIT is often a more technical testing process than UAT. Testers design and execute SIT, as they've become familiar with the types of defects common in the application throughout the SDLC. The SIT phase precedes UAT. Because technical expertise varies significantly between users and testers, the two groups are likely to find vastly different defects in UAT and SIT. SIT often uncovers bugs unit tests didn't catch -- defects that rely on a workflow or interaction between two application components, such as a sequential order of operations.
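The kind of defect SIT targets — one that only appears in the workflow between two components — can be sketched with two toy components. Everything here (the cart, the discount engine, the amounts) is made up for illustration; each class would pass its own unit tests, but only a test that drives the real sequence between them exercises the integration.

```python
# Hypothetical components for an SIT-style sketch: each is trivially
# correct in isolation; the integration test checks the workflow
# between them, where order-of-operations defects hide.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        return sum(self.items)

class DiscountEngine:
    def apply(self, total, percent):
        # Apply a percentage discount to a running total.
        return round(total * (1 - percent / 100), 2)

def checkout(cart, discount_engine, percent):
    # The sequence under test: total first, then discount. Swapping
    # these steps is exactly the class of bug SIT catches.
    return discount_engine.apply(cart.total(), percent)

# Integration (SIT-style) check: drive both components end to end.
cart = Cart()
cart.add(40.00)
cart.add(60.00)
charged = checkout(cart, DiscountEngine(), 10)
assert charged == 90.00, "integration defect: discount applied out of order"
print(charged)
```

Unit tests on `Cart` and `DiscountEngine` alone would never detect a `checkout` that applied the discount before the final item was added; only the integrated sequence reveals it.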


How Red Hat tackles security


In the just-released Red Hat Product Security Report 2019, Red Hat said it's seeing more customers than ever trying to grapple with ever-mounting security issues by using third-party scanners. But, the report cautioned, "While scanning tools can provide a useful 'single pane of glass' view of vulnerabilities across an enterprise-wide environment, they generally do a poor job of articulating risks specific to a technology or implementation." So, Red Hat Engineering and Red Hat Product Security both work to explain exactly what's going on with security issues and to make Red Hat's "upstream packages enterprise-ready by regression testing, hardening, and tweaking the package to meet our customers' unique business demands and our release standards." To help improve this process, Red Hat made a fairly sizable change to the Red Hat Enterprise Linux (RHEL) support life cycle. Because "RHEL is the foundation of all of our products and services, we felt it was important to expand the scope of what we supported." So, RHEL now includes patches and fixes for Important-rated issues, which typically cover the largest share of issues. Previously, Red Hat was more selective about which Important-rated issues were addressed in RHEL's Extended Update Support.


Banks are adopting account aggregator framework on data

Account aggregators are responsible for transferring, but not storing, client data. An AA ecosystem, as envisaged by the Reserve Bank of India (RBI), would be a platform for financial services companies to reach out to the consumer to seek consent before using their personal data to optimise their product offerings. "All the work is going in that direction. We are coordinating between the ecosystems. The scale-up will see a hockey stick effect. These banks and AA companies are part of the first wave. Many are waiting to join the second wave," said BG Mahesh, a cofounder of Sahamati, a non-profit collective of account aggregators. So far, Cams Finserv, FinSec AA Solutions and Cookiejar Technologies have received operating licences from the RBI. Kotak Mahindra Group said it was launching a pilot among 50,000 employees to test use cases for the AA framework in banking, broking, wealth management, and insurance. "As we speak we are launching a pilot with our employees before we launch it for our customers."


Making Your Code Faster by Taming Branches


Most software code contains conditional branches. In code, they appear in if-then-else clauses, loops, and switch-case constructs. When encountering a conditional branch, the processor checks a condition and may jump to a new code path, if the branch is taken, or continue with the following instructions. Unfortunately, the processor may only know whether a jump is necessary after executing all instructions before the jump. For better performance, modern processors predict the branch and execute the following instructions speculatively. It is a powerful optimization. There are some limitations to speculative execution, however. For example, the processor must discard the work done after the misprediction and start anew when the branch is mispredicted. Thankfully, processors are good at recognizing patterns and avoiding mispredictions, when possible. Nevertheless, some branches are intrinsically hard to predict and they may cause a performance bottleneck. Programmers can be misled into underestimating the cost of branch mispredictions when benchmarking code with synthetic data that is either too short or too predictable.
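One common way to tame a hard-to-predict branch is to rewrite it as arithmetic, so the processor has no jump to speculate on. The sketch below shows the transformation on a toy problem with deliberately random (unpredictable) input; note that in CPython the interpreter's overhead dominates, so this illustrates the rewrite itself rather than measuring the hardware effect, which is best observed in compiled code.

```python
import random

# A data-dependent branch: on random input, the condition below is
# unpredictable, which is exactly the case that causes mispredictions
# in compiled code.
def sum_nonnegative_branchy(data):
    total = 0
    for x in data:
        if x >= 0:          # unpredictable branch on random input
            total += x
    return total

# Branchless rewrite: the condition becomes a 0-or-1 multiplier, so
# there is no conditional jump for the processor to predict.
def sum_nonnegative_branchless(data):
    total = 0
    for x in data:
        total += x * (x >= 0)   # bool coerces to 0 or 1
    return total

random.seed(42)
data = [random.randint(-100, 100) for _ in range(1000)]
assert sum_nonnegative_branchy(data) == sum_nonnegative_branchless(data)
```

The same idea underlies compiler transformations such as conditional moves: trade a possible misprediction penalty for a small amount of unconditional work.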



Quote for the day:


"You can't lead anyone else further than you have gone yourself." -- Gene Mauch


Daily Tech Digest - March 15, 2020

The rising threat of drones to cybersecurity: What you need to know

While it may seem impossible for a drone to affect cybersecurity, there are several factors that make it entirely possible for drones to carry out many malicious cybercrimes. For instance, drones equipped with cameras have been associated with spying. In fact, there have been many arrests for drone spying — and that’s not all a drone can do. In addition to taking bird’s-eye pictures and video, drones can also be used to spy on networks, capture data and block communications, making them a huge threat to cybersecurity as a whole. The fact that drones carry this type of threat to cybersecurity is due to their vast capabilities. In addition to cameras, many drones come equipped with GPS, USB ports, and other means that can easily allow them to be hijacked. Hackers can use tools to easily tap into drones if the owner doesn’t install certain security measures. This leaves many commercial drones at risk of exploitation due to the fact that they communicate with their operators via WiFi and GPS, which often tend to be unencrypted. With all that a drone can do, it comes as no surprise that they pose such a risk to cybersecurity. In addition to the privacy issue and the fact that drones are vulnerable to hackers, previous incidents prove how risky the small aircraft can be.



The report also highlighted that just 9% of security professionals are neurodivergent, although meaningful and reliable comparison of this measure against the wider industry is not yet possible – DCMS nevertheless said it found a concerning lack of awareness of neurodiversity in the sector. The research process highlighted a number of barriers and challenges to increasing the diversity of Britain’s cyber security workforce. DCMS said that while diversity was seen as more important, there remain pockets of scepticism, with some interviewees claiming the topic was overemphasised, or no worse than in other digital sectors, and therefore not a problem. Many respondents also said they did not view a diverse workforce as a means to help tackle the skills shortage in security, focusing instead on non-specific benefits. This is in spite of a growing and substantial body of evidence that proves diverse teams are a hugely important factor in building a responsible organisation.


Learning Data Science Skills Is Easier Than You Think

Data skills are valuable across all industries and job functions as decision making is becoming more and more data-driven, and gaining these skills isn’t as challenging as originally thought. The Burning Glass report states that “the demand for metrics — and the growing ease of measuring and visualizing them — is reshaping business practices across industries,” citing marketing and business analysis as examples. It also highlights the demand for data science and analytics skills in decision-making roles, including managers across a range of industries. So, where to start? IBM Data Scientist Joseph Santarcangelo, Ph.D., shared his expertise on getting started in data science, which starts with learning Python: “Today with data science, for a lot of it you don’t have to have a Ph.D. anymore. You don’t have to spend years and years studying something. The runway is a lot shorter this year for data science...now all you really have to know is Python and have a basic understanding of what’s going on and it’s pretty remarkable where you can go.”
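Santarcangelo's point that a little Python goes a long way can be made concrete with a few lines of standard-library code. This sketch summarizes a small made-up dataset — no Ph.D., no external packages, just the built-in statistics module.

```python
import statistics

# A first taste of data work using only the standard library:
# summarizing a small (made-up) week of daily page views.
page_views = [120, 135, 150, 110, 160, 145, 155]

summary = {
    "mean": statistics.mean(page_views),
    "median": statistics.median(page_views),
    "stdev": round(statistics.stdev(page_views), 2),
}
print(summary)
```

From here, the usual next steps are libraries like pandas and matplotlib, but the core workflow — load data, compute summaries, ask questions — is the same.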



Data Experts Say New Sources Must Not Replace Traditional Data


Participants added that new institutional frameworks, including legal guidelines, are needed to manage the influx of new technologies. Lisa Bersales, the first National Statistician of the Philippines, called for quality-assurance frameworks for big data and citizen-generated data. Gero Carletto, World Bank, noted that risks arise from the lack of standards for integrating different data sources. The speakers also advised caution about the “recent boom” in public-private data partnerships, suggesting that they must be managed carefully. Fredy Rodriguez, Cepei, explained the need for partnerships to establish an effective institutional framework in order to share data and determine how shared data will be used. Finally, the discussion drew attention to the evolving role of the National Statistics Offices (NSOs). Experts said NSOs’ mandate has evolved significantly in the past few years; no longer just producers of data, they are now responsible for coordinating a broad data ecosystem of entities across government, civil society, and the private sector, and for brokering new partnerships to produce, clean, compile, and analyze data to produce official statistics. In effect, NSOs have become “data stewards.”


Digital transformation: 3 ways to ease the fear factor

Convey what the state of the business could be like without digital transformation. Understanding that the company’s future could be at risk and that their skills will become obsolete with antiquated legacy systems will likely have a significant impact on everyone. Remind employees that digital change is about designing and delivering better products and services and that this is why many people get involved with IT in the first place – to make a positive change. Positioning change in this way can help everyone see it through a different lens. Be direct and honest in all your communications, especially with employees who actively oppose change. State what the goals are, what the rollout will look like, and what the benefits will be for customers and partners as well as employees. Create a conversation and openly acknowledge concerns. Don’t shy away from difficult conversations – these are the ones employees will focus on, and failing to engage in them will drive the message that change is unpalatable.


How to use digital twins to reduce risk

In the last few years, the term "digital twin" has entered the lexicon, likely as a result of overzealous consultants applying a complicated name to a simple concept. A digital twin is nothing more than a computer simulation of something in the physical world. The Cessna I careened through the skies of Chicago on my monochrome monitor as a youth was a digital twin, just as a spreadsheet predicting next year's sales can also be a digital twin, as they both aim to simulate a future outcome using data and logic. Digital twins are incredibly valuable for the rather obvious reason that they can help you gather key insights and model potential future outcomes at a fairly low cost, thus de-risking larger investments. Consider my early experiments in flying an aircraft. For $40 or so I was able to crash my "digital twin" of a $200,000 aircraft multiple times before gathering the critical insight that I needed to pull back on the stick instead of pushing forward. In a more relevant recent example, I worked with a client who was trying to determine if the logistical costs of a complex distribution network could be sustained at a price customers were willing to pay. 
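The spreadsheet example above — a model projecting next year's sales from data and logic — fits in a few lines of code. All the numbers and the growth assumption here are invented for illustration; the point is that even a toy twin lets you crash the cheap model instead of the expensive reality.

```python
# A minimal "digital twin" of a sales pipeline: project revenue under
# a growth assumption, then test a what-if scenario before committing
# real money. All figures are made up for illustration.

def project_sales(base_monthly, growth_rate, months):
    revenue, total = base_monthly, 0.0
    for _ in range(months):
        total += revenue
        revenue *= 1 + growth_rate   # compound monthly growth
    return round(total, 2)

baseline = project_sales(100_000, 0.02, 12)   # assume 2% monthly growth
what_if = project_sales(100_000, 0.05, 12)    # de-risk: simulate 5% first
print(baseline, what_if)
```

Like the flight simulator, the twin's value is in cheap failure: varying `growth_rate` costs nothing, while discovering the same sensitivity in production can cost a quarter's revenue.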


For many industrial networks, the highest standard of security is an "air gap," a physical disconnect between the inner sanctum of software connected to physical equipment and the less sensitive, internet-connected IT systems. But very few private-sector firms, with the exception of highly regulated nuclear power utilities, have implemented actual air gaps. Many companies have instead attempted to restrict the connections between their IT networks and their so-called OT or operational technology networks—the industrial control systems where the compromise of digital computers could have dangerous effects, such as giving hackers access to an electric utility's circuit breakers or a manufacturing floor's robots. Those restricted connections create choke points for hackers, but also for remote workers. Rendition InfoSec founder and security consultant Jake Williams describes one manufacturing client that carefully separated its IT and OT systems. Only "jump boxes," servers that bridge the divide between sensitive manufacturing control systems and nonsensitive IT systems, connected them. Those jump boxes run very limited software to prevent them from serving as in-roads for hackers.


Zero trust: Taking back control of IT security


They say: “Zero trust changes the traditional model of ‘trust, but verify’ – where you assume that any device or asset attached to your internal network is likely to be permitted and safe to access internal-only resources, but still verify that this is the case. Instead, that becomes ‘never trust, always verify’ – where every device must pass authentication and security policy checks to access any corporate resources, and to control access only to the extent required.” Trust involves an interplay between people and technology. According to Walsh and Grannells, the starting point for these trust factors is a well-thought-out and up-to-date set of policies, standards, procedures and work practices, supplemented by detailed, up-to-date network documentation and asset inventories covering information, software licences and hardware. The pair believe zero trust enables IT security to regain control. “The shift to zero trust is where information security is taking back control of the many new perimeters of the corporate ecosystem,” they say.
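"Never trust, always verify" can be sketched as a policy check that runs on every access request, regardless of where the device sits on the network. The device fields, posture checks, and resources below are illustrative assumptions, not any real product's policy model.

```python
# Toy zero-trust authorization check, evaluated per request. Network
# location grants nothing; every device must pass authentication and
# posture checks, and access is scoped to the minimum required.
# Field names and checks are hypothetical.

REQUIRED_CHECKS = ("authenticated", "mfa_passed", "disk_encrypted", "patched")

def authorize(device: dict, resource: str) -> bool:
    # Verify every posture check on every request.
    if not all(device.get(check, False) for check in REQUIRED_CHECKS):
        return False
    # Least privilege: grant access only to explicitly allowed resources.
    return resource in device.get("allowed_resources", ())

laptop = {
    "authenticated": True,
    "mfa_passed": True,
    "disk_encrypted": True,
    "patched": False,               # one failed posture check denies access
    "allowed_resources": ("payroll",),
}
print(authorize(laptop, "payroll"))  # denied: fails the patch-level check
```

A real deployment would pull these signals from an identity provider and a device-management platform, but the shape of the decision — verify everything, scope everything — is the same.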


How do we stay smarter than our smart home devices?


It’s difficult to argue with the statement that connected devices do already enrich our lives and will continue to do so more impressively in the near and distant future. The not-so-great news? IoT manufacturers really need to step up their cybersecurity game. Many are already working tirelessly to do so, but as many are pretty much starting from scratch, they have their work cut out for them. With more than half of companies failing to require third-party security and privacy compliance, it’s no surprise that in the past couple of years we’ve seen connected device data breaches almost double, going from 15% to 26%. Furthermore, some of these incidents encroached on people’s privacy in a very alarming way. Remember when Amazon’s Alexa recorded a private conversation and sent the content to a user’s random contact? Or when we learned that our BFF, iRobot’s Roomba, can actually map our homes and share this information? But with incredible devices like smart thermostats that can save us money – and even save our lives by turning off the stove if it’s on for too long – giving up on IoT because of its cybersecurity flaws is not an option.


FortiGuard Labs’ Derek Manky Talks Swarm Attacks, War of Deception

Using swarm technology, intelligent swarms of bots can share information and learn from each other in real time. They could target a network, attacking multiple systems at the same time, and overwhelming the network because of the sheer number of attacks and speed at which they occur. “This is a way they could weaponize it, particularly with 5G being rolled out, which means a lot of devices can communicate really quickly together and that’s when you have a swarm,” Manky said. “You have connected devices that communicate, and if you hook up an AI system to that, those devices can launch an attack on their own. It looks quite scary.” On the bright side, organizations can still get ahead of these types of attacks, Manky said. This starts with basic cybersecurity hygiene, which, unfortunately is something many companies still struggle with. “You need a proper security architecture, segmentation,” which reduces a company’s attack surface by essentially sealing off workloads from the rest of the network, thus preventing hackers from gaining access to the wider system.



Quote for the day:


"Trust is the highest form of human motivation." -- Stephen R. Covey


Daily Tech Digest - March 14, 2020

Data Science Is Now Bigger Than 'Big Data'

The now-ubiquitous term “big data” begins its meteoric rise in lockstep with cloud computing’s fall, suggesting that the public’s focus on hardware rental was rapidly replaced with how all of that computing power was being used: to analyze massive datasets. In contrast, “data science” and “deep learning” both take off in 2013 and accelerate over 2014. Interestingly, despite deep learning’s Cambrian Explosion over the past few years, search interest appears to have leveled off as of last January, perhaps suggesting that we are now searching more for the individual applications of deep learning rather than the phrase itself. Most significantly, as of January of this year, “data science” has surpassed “big data” in total search volume. Just as cloud computing’s hardware focus gave way to big data’s emphasis on what we do with all that hardware, so too has the focus shifted now from assembling huge piles of data to the people and processes making sense of all of that data. While it may be entirely coincidental, it is interesting to note that data science and deep learning burst into popularity in the immediate aftermath of Edward Snowden’s June 2013 disclosures, raising questions of whether vastly increased public awareness of data mining led to increased interest in those fields.



How to write a business continuity plan: the easy way

The most obvious reason to implement a BCP is to ensure that your organisation remains productive in the event of a disruption. Customers must still be able to use your services, employees must be able to continue doing their job and you can’t allow yourself to face a huge backlog of work as delays continue. But business continuity isn’t only about short-term goals. The cyber security landscape has become increasingly volatile in recent years, with cyber crime continuing to spiral and organisations’ reliance on technology leading to vast numbers of accidental and deliberate data breaches. As a result, organisations need to prove to customers and stakeholders that they are prepared for anything. Business continuity is especially important for OES (operators of essential services) and DSPs (digital service providers), as the delays could either be widespread or cause major headaches. To ensure that such organisations are sufficiently prepared for risks, the EU adopted the NIS Directive, which was transposed into UK law as the NIS (Network and Information Systems) Regulations 2018.


New Flat Lens Enables Focus-Free Cameras With Drastically Reduced Weight


“Our flat lenses can drastically reduce the weight, complexity and cost of cameras and other imaging systems, while increasing their functionality,” said research team leader Rajesh Menon from the University of Utah. “Such optics could enable thinner smartphone cameras, improved and smaller cameras for biomedical imaging such as endoscopy, and more compact cameras for automobiles.” In Optica, The Optical Society’s (OSA) journal for high impact research, Menon and colleagues describe their new flat lens and show that it can maintain focus for objects that are about 6 meters apart from each other. Flat lenses use nanostructures patterned on a flat surface rather than bulky glass or plastic to achieve the important optical properties that control the way light travels. “This new lens could have many interesting applications outside photography such as creating highly efficient illumination for LIDAR that is critical for many autonomous systems, including self-driving cars,” said Menon. Conventional cameras, whether used in smartphones or for microscopy, require focusing to ensure that the details of an object are sharp. If there are multiple objects at different distances from the camera, each object must be focused separately.


Open-source security: This is why bugs in open-source software have hit a record high


A large source of newly found bugs comes from Google's open-source fuzzing tools, such as OSS-Fuzz, which by 2018 had helped find 9,000 flaws in two years. As of January 2020, it's helped find 16,000 bugs in 250 open-source projects. WhiteSource found that 85% of open-source vulnerabilities are disclosed and have a fix already available. However, it notes that some users are not aware of these fixes because only 84% of known open-source bugs make it to the National Vulnerability Database (NVD). "Information about vulnerabilities is not published in one centralized location, rather scattered across hundreds of resources, and sometimes poorly indexed – often making searching for specific data a challenge," it notes. WhiteSource last year brought its vulnerability database to GitHub to support its security-alerts service. GitHub scans project dependencies for vulnerabilities in projects written in PHP, Java, Python, .NET, JavaScript and Ruby. It's helped developers find and fix millions of known flaws in dependencies. "Our concern is that, while these tools will help to report vulnerability issues in a proper manner, they will probably only aggravate the issue with software developers who are already struggling to keep up with the increased rate," WhiteSource notes.


Phishing attacks exploit YouTube redirects to catch the unwary

Attackers are increasingly exploiting the fact that email gateways turn a blind eye to links to popular sites such as YouTube, in order to phish passwords from unsuspecting computer users. Researcher Ashley Trans of Cofense highlighted the threat in a blog post describing a recent phishing campaign. In the attack, an unsuspecting user receives an email which purports to come from SharePoint, claiming that a new file has been uploaded to his company’s SharePoint site. ... Closer examination reveals that although the link in the email does indeed point initially at YouTube (youtube.com), it also sends a series of parameters telling YouTube to redirect any traffic to a URL at <companyname>[.]sharepointonline-ert[.]pw, which in turn ultimately takes the user’s browser to its final destination: a phishing page hosted on a legitimate Google site, googleapis.com. ... The disappointing truth is that YouTube provides a method for anyone to create a link at youtube.com, which automatically redirects browsers to third-party phishing sites without any warning.
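The defensive counterpart is straightforward: a link can display a trusted domain while carrying its real destination in a query parameter. The sketch below uses the standard library to surface a redirect target; the URL is illustrative, not the actual campaign's, and the `q` parameter name is an assumption.

```python
from urllib.parse import urlparse, parse_qs

# Redirect links show a trusted hostname but smuggle the real
# destination in the query string. Extracting the parameter reveals
# where the browser would actually end up. The URL and parameter name
# below are illustrative.

def redirect_target(url, param="q"):
    parsed = urlparse(url)
    values = parse_qs(parsed.query).get(param)
    return values[0] if values else None

link = "https://www.youtube.com/redirect?q=https://evil.example/phish"
print("displayed host:", urlparse(link).hostname)  # www.youtube.com
print("real target:  ", redirect_target(link))     # the smuggled URL
```

Email gateways that whitelist by hostname alone will pass such a link; resolving embedded redirect parameters before delivery closes the gap.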


CIO interview: Miguel Rio Tinto, Emirates NBD


It has been an enormous effort and a challenging journey, with the organisation completely changed to adopt agile practices. The company is between 65% and 75% of the way through its digital transformation cycle. “We still have some important milestones to achieve but by the end of 2020, we will be working and using the same technologies as cloud-natives and we will be comprised of 100% cloud-enabled, agile teams who collaborate with the business.” To achieve his mission, Rio Tinto has a Dubai-headquartered technology operation of around 1,200 internal and external staff. “We brought in managers from banks in Australia, Canada, the US, Turkey, Europe, India and Dubai. Half of the managers are new to the organisation and over the course of 18 months, half of our engineers were also replaced,” he says. The 1,200 IT staff are arranged into 60 different sets of “squads”, which directly collaborate with business units, including retail, wholesale and enterprise.


Want To Be A Cyber Security Pro? It Goes Way Beyond Learning To Code

Learning Linux for the basics, such as terminal usage, SSH (Secure Shell), users and permissions, processes, networking and databases, could be very handy as well. Those not accustomed to the Linux environment and its command line can first learn them using web resources and tutorials. Core Linux commands, input/output redirecting and piping, file manipulation, basic network configuration and user account management are some of the key things to focus on here, which can be incredibly useful for security expertise later on. But as cybersecurity is a broad field, experts need to have a solid grasp of networking as well, and for this, learners may have to spend hundreds of hours learning the nitty-gritty of company networks and how hackers may break into them to gain access to sensitive data. According to many security experts, professionals in the space may also choose to learn more about how networks and systems operate and less programming. Network security specialists identify, anticipate and fix security threats to computer networks. They additionally perform an essential function in protecting the integrity and confidentiality of a company’s data and information systems.


Microsoft: WSL2's Linux kernel will be delivered to Windows 10 users via Windows Update


Specifically, Microsoft has decided to remove the Linux kernel from the Windows OS image with WSL2. Instead, the company will deliver it to users' machines using Windows Update. Users will be able to manually check for new kernel updates by clicking the "Check for Updates" button or by waiting for Windows to do this automatically. "Our end goal is for this change to be seamless, where your Linux kernel is kept up to date without you needing to think about it. By default this will be handled entirely by Windows, just like regular updates on your machine," said Microsoft Program Manager Craig Loewen in a blog post today outlining the coming change. Loewen noted that initially, Windows 10 2004 users and Insider testers using Slow Ring preview builds will temporarily need to manually install the Linux kernel. Within "a few months," they'll receive an update that adds automatic install and servicing capabilities. (In fact, Slow Ring testers just got today, March 13, a new Windows 10 2004 test build, 19041.153, which includes this servicing change to WSL2.)


Commission Calls for Revamping US Cybersecurity


The commission, which was mandated under the 2019 National Defense Authorization Act, is co-chaired by Sen. Angus King, I-Maine, and Rep. Mike Gallagher, R-Wis. It also includes Trump Administration officials. Its mission is to develop "a consensus on a strategic approach to defending the United States in cyberspace against cyberattacks of significant consequences," according to the report. The report lists China, Russia, Iran and North Korea as major threats to cybersecurity in the U.S., pointing at intellectual property theft carried out by Chinese operators and the election meddling carried out by Russian actors that has damaged public trust in the integrity of American elections. The report puts much of its emphasis on election security and how other countries are attempting to manipulate the vote through hacking and disinformation. "If we don't get election security right, deterrence will fail and future generations will look back with longing and regret on the once powerful American Republic and wonder how we screwed the whole thing up," King and Gallagher note in the report.


Maintaining Mental Health on Software Development Teams

Work-related anxiety and mental disorders are becoming a common challenge among tech companies. According to the International Journal of Social Sciences, software developers have a considerably higher chance of experiencing fatigue, burnout, anxiety, and stress compared to their colleagues who perform mechanical tasks. Deteriorating mental health threatens not only the wellbeing of employees, but also the company’s overall productivity. Researchers from the Institute of Software Technologies in Stuttgart found that mentally exhausted or depressed developers produce lower-quality code and tend to miss deadlines. Today, tech companies are realizing the importance of mental health and taking action to ensure their dedicated development teams stay healthy and sane. Here at Beetroot, we strive to create a homely and comfortable atmosphere that minimizes the pressure on our teams. However, despite our best efforts, there are still challenging times. We recently spoke with our HR representative and psychologist, Vova Vovk, about mental health.



Quote for the day:


"If you are not willing to give a less experienced qualified professional a chance, don't complain you are charged double for a job worth half." -- Mark W. Boyer


Daily Tech Digest - March 13, 2020

The Digital Services Act: The Next GDPR

While we do not expect legislation to be complete in 2020, this year is largely when the lines around the initial proposals will be drawn. Businesses need to engage now to ensure that the new Commission understands the plethora of services they are due to regulate. While the work will be led by Internal Market Commissioner Thierry Breton, it will become a joint effort across the College. With policy issues like consumer protection, disinformation, workers’ rights in the gig economy and competition also on the agenda, businesses will need to widen engagement efforts to the cabinets of Didier Reynders (Justice), Věra Jourová (Rule of Law), Nicolas Schmit (Employment) and Margrethe Vestager (Digital and Competition). Meanwhile, businesses must also be aware of the risk of the Digital Services Act becoming a belated Christmas tree bill, where policymakers in the Council and European Parliament can reopen old arguments concerning copyright or privacy. Of immediate concern to businesses is the expected consultation and communication on the scope of the DSA in the first quarter of 2020, followed by the first legislative proposals in the latter part of the year.


4 questions to determine your IT team's "electability"

Just as voters generally want candidates who reflect their values, organizations want to see an IT shop that reflects their values. For example, a financial institution that values (and needs) trust and security would suffer "organ rejection" with a technology leader that played fast and loose with security and put the overall company at significant risk. Ask yourself if your leadership style and technology organization reflect the broader company's risk appetite, speed of working and communicating, and overall culture. It's difficult to become a trusted advisor when you don't speak the same language or value the same organizational traits. While politicians who can reach across party lines seem to be an increasingly rare commodity, IT is an area ripe for cross-organizational collaboration. By virtue of working with most of the organization in some capacity, we're uniquely positioned to forge relationships that provide value to the company. Rather than acting as an order-taker who diligently implements a project for a defined stakeholder, look for opportunities to leverage the company's technology assets in new ways.


Secrets from cybersecurity pros: How to create a successful employee training program

The first step in developing a training program is finding the skills gap in your organization. Begin by determining what cybersecurity areas employees are most unfamiliar with, Papatheodorou said. "Their needs can be assessed via an online survey, or by asking employees and managers directly," Papatheodorou said. Another avenue for preparation is looking at outcomes. "Start by deciding what outcomes you most desire, and pick the right modality of training to best meet those outcomes -- which varies per organization," Lucas said. For example, "ask the security team and leadership some questions: What are our biggest risks? What are we protecting? All of this data will help you clarify where you should start," Plaggemier said. The organization could decide to do a general cybersecurity threat overview, a basic education that could teach employees how to spot and prevent breaches. Or, depending on the company's needs, the training could be more specialized, focusing on password security, email and social media policies, and protection of company data, Papatheodorou said.


Next wave of digital transformation requires better security, automation

Modern networks require application services—a pool of services necessary to deploy, run, and secure apps across on-premises or multi-cloud environments. Today, 69% of companies are using 10 or more application services, such as ingress control and service discovery. Ingress control is a relatively new application service that has become essential to companies with high API call volumes. It's one of many examples of the growing adoption of microservices-based apps. Security services remain the most widely deployed, dominating the top five: SSL VPN and firewall services (81%); IPS/IDS, antivirus, and spam mitigation (77%); load balancing and DNS (68%); web application firewalls (WAF) and DDoS protection (each at 67%). Over the next 12 months, the evolution of cloud and modern app architectures will continue to shape application services. At the top of the list (41%) is software-defined wide-area networking (SD-WAN). SD-WAN enables software-based provisioning from the cloud to meet modern application demands. Early SD-WAN deployments focused on replacing costly multi-protocol label switching (MPLS), but there is now greater emphasis on security as a core requirement for SD-WAN.


The algorithmic trade-off between accuracy and ethics


Building fairness into algorithms requires identifying a model that minimizes unfairness. This rather tautological quest is pursued by purposely imposing restraints on the algorithm, such as equalizing the false rejection rate for bank loans across different groups of people. Deciding what these restraints should be is a chore more appropriate for leaders than for engineers — it entails human judgement, policy, and ethics. The remaining pitfalls described by Kearns and Roth are caused not so much by algorithms as by humans trying to optimize algorithmic outcomes for themselves. For instance, people who live in residential neighborhoods that offer alternative routes to traffic-jammed freeways have been known to report nonexistent accidents to the navigation app Waze to induce it to steer drivers away from them. The solution set to these pitfalls includes teaching algorithms to anticipate and adjust for efforts to game them, using concepts such as simulated self-play. Gerald Tesauro of IBM Research first applied this idea successfully in 1992, when he created a world-class backgammon program by inducing it to learn by playing itself. 
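One such restraint can be made concrete with a few lines of code. The sketch below measures the false rejection rate per group over a set of loan decisions and the gap between groups that a fairness constraint would try to bound; the group labels and decision data are invented for illustration:

```python
from collections import defaultdict

def false_rejection_rates(decisions):
    """Compute the false rejection rate per group.

    decisions: iterable of (group, approved, creditworthy) tuples.
    A false rejection is a creditworthy applicant who was denied.
    """
    denied = defaultdict(int)
    creditworthy = defaultdict(int)
    for group, approved, worthy in decisions:
        if worthy:
            creditworthy[group] += 1
            if not approved:
                denied[group] += 1
    return {g: denied[g] / creditworthy[g] for g in creditworthy}

# Hypothetical loan decisions: (group, approved?, actually creditworthy?)
data = [
    ("A", True, True), ("A", False, True), ("A", True, False), ("A", False, True),
    ("B", True, True), ("B", True, True), ("B", False, True), ("B", False, False),
]
rates = false_rejection_rates(data)
gap = max(rates.values()) - min(rates.values())
print(rates)  # per-group false rejection rates
print(gap)    # the disparity a fairness constraint would bound
```

Deciding how small that gap must be, and which groups and error rates to equalize in the first place, is exactly the judgement call the authors assign to leaders rather than engineers.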


Breaking Through Three Common Engineering Myths


Myth: Engineers Are Very Logical and Not Creative. This one seems to make sense – if engineers were creative, wouldn’t they have decided to be artists, writers, or some other "Fine Arts" profession? Wrong! The key word in being creative is right there – to create! Engineers create products, services, and processes that influence people every day. Whether your work goes into consumer applications, devices, or machines, the end product of engineering work is used by other people. If engineers suppressed their creativity, they would miss out on many insights into ways to solve problems. Every day, engineers need to find new ways to think outside the box to tackle new challenges. They have the fabulous opportunity and responsibility of imagining ways in which the world could be different and then creating ways to make that happen. That is at the heart of what creativity is all about and it should be inspiring and exciting for engineers. For example, engineering innovations have been a big part of healthcare improvement over the years.


AI could help with the next pandemic—but not with this one


Darren Schulte, an MD and CEO of Apixio, which has built an AI to extract information from patients’ records, thinks that medical records from across the US should be opened up for data analysis. This could allow an AI to automatically identify individuals who are most at risk from Covid-19 because of an underlying condition. Resources could then be focused on those people who need them most. The technology to read patient records and extract life-saving information exists, says Schulte. The problem is that these records are split across multiple databases and managed by different health services, which makes them harder to analyze. “I’d like to drop my AI into this big ocean of data,” he says. “But our data sits in small lakes, not a big ocean.” Health data should also be shared between countries, says Inam: “Viruses don’t operate within the confines of geopolitical boundaries.” He thinks countries should be forced by international agreement to release real-time data on diagnoses and hospital admissions, which could then be fed into global-scale machine-learning models of a pandemic.


Sumo Logic: cultural process shifts should precede platform lifts

For IT teams at new companies, this approach often involves making use of cloud services and systems to quickly construct what would have previously needed armies of consultants and huge amounts of hardware to deliver. What an opportunity to make the most of modern IT. For companies with existing investments, the sheet of paper is not so blank, but it still probably has plenty of scope for development. Digital transformation projects may be more complex due to the mix of old and new technology, but they should still provide great opportunities to modernise. ... The issue here is that these individual technology elements – cloud services offering more power, applications and information sources proffering more data, analytics tools providing the ability to work with data in real time – lack context. Each of these projects might be a good opportunity to modernise, but they also have to join up with each other and with how people actually work in order to succeed. To achieve this, we have to look at the processes involved, the business objectives that we are looking to meet, and what intelligence gaps exist.



The report lists two major ransomware attacks that had dramatic effects on production supply chains in 2019.  The March 19 cyberattack on aluminum producer Norsk Hydro involved LockerGoga, a previously seen ransomware tool that "halted operations at the company's corporate headquarters in Norway and impeded productivity in its extruded solutions division throughout Europe and North America."  "Analysts believe the attack marks a worrying trend, due to its international scope and direct impact on production and logistics assets," the report added. On June 7, there was another ransomware attack on Belgian aerospace supplier ASCO Industries that forced the company to shut down production lines at four different factories across North America and Europe.  The attack was so damaging that the company furloughed nearly 1,000 employees temporarily and was out of operation for more than a month. "Greater connectivity and digitalization are making manufacturing and supply chain operations more vulnerable to cyber-threats. Factories and logistics facilities can be caught in the crossfire of large-scale cyberattacks by criminals or state-sponsored groups, but they are also being targeted directly by a variety of actors," the report said.


Raspberry Pi is your new private cloud

If you’ve not guessed by now, this makes running a Raspberry Pi-based Kubernetes cluster feasible since this Kubernetes distribution is really purpose-built for the Pi, of course with some limitations. ... This enabling technology lets cloud architects place Kubernetes clusters running containers outside of the centralized public cloud on small computers that will work closer to the sources of the data. The clusters are still tightly coordinated, perhaps even spreading an application between a public cloud platform and hundreds or even thousands of Raspberry Pis running k3s. Clearly it’s a type of edge computing with thousands of use cases. What strikes me about this pattern of architecture is that cheap, edge-based devices are acting like lightweight private clouds. They provision resources as needed and use a preferred platform such as containers and Kubernetes. Of course, they have an upper limit of scalability. This is what hybrid cloud was supposed to be, but never was. Pairing a private and public cloud meant…well…you had to use a private cloud. Purpose-built private clouds fell way behind in features and functionality, so much so that enterprises are moving away from them in 2020, whether they are already deployed or not.



Quote for the day:

"It is time for a new generation of leadership to cope with new problems and new opportunities for there is a new world to be won." -- John F. Kennedy

Daily Tech Digest - March 12, 2020

Stop saying employees are the weakest link in cybersecurity


Firstly, framing the conversation like this doesn’t get us anywhere. Are football players to blame when they lose a match? Well, in a way, but the players are also to ‘blame’ when they win. And even when they do lose, telling them that they’re the problem is only going to demoralize them and lead to further losses. Secondly, if blame has to lie somewhere, it surely lies with the security awareness programs rather than the employees who rely on those programs to better protect themselves. The reason that human-error breaches continue to occur at such a rate is that – and let’s be honest here – security awareness training in its current form just doesn’t work. Training doesn’t work because, in most cases, it focuses solely on awareness. Awareness is all well and good, but increased awareness by itself is not necessarily what matters. Just because people are ‘aware’ of cyber risks doesn’t mean that, in the real world, they will behave in a more secure way.



The top priority for CIOs is to ensure companies can manage the huge and sudden spike in demand for remote-working capacity caused by the closure of offices and other facilities. “This required my team to make some adjustments to the way we supply necessary equipment and remote access to [our] networks,” explains Kota, who says Autodesk has created a self-service toolkit so that many more employees can quickly set themselves up to work remotely if the need arises. Nikolaj Sjoqvist, the chief digital officer of Waste Management, a $47 billion waste-management and environmental-services giant, says it has increased the number of licenses available for virtual private networks and is scaling up its networking capacity to support more remote work. Sjoqvist is also tapping cloud-based applications and services that can quickly be spun up to support the effort. For employees used to working on desktops, his team is leveraging virtual desktop infrastructure technology to give them access to applications on their personal computers.


What is data governance? A best practices framework for managing data assets


Data governance is just one part of the overall discipline of data management, though an important one. Whereas data governance is about the roles, responsibilities, and processes for ensuring accountability for and ownership of data assets, DAMA defines data management as "an overarching term that describes the processes used to plan, specify, enable, create, acquire, maintain, use, archive, retrieve, control, and purge data." While data management has become a common term for the discipline, it is sometimes referred to as data resource management or enterprise information management. Gartner describes EIM as "an integrative discipline for structuring, describing, and governing information assets across organizational and technical boundaries to improve efficiency, promote transparency, and enable business insight." ... BARC warns that data governance is not a "big bang initiative." As a highly complex, ongoing program, data governance runs the risk of participants losing trust and interest over time. To counter that, BARC recommends starting with a manageable or application-specific prototype project and then expanding data governance across the company based on lessons learned.


How Cloud, Security and Big Data Are Forcing CIOs to Evolve

Businesses are far more concerned with security, data privacy and compliance than ever before, and rightfully so. The average cost of a data breach today is $3.9 million, according to IBM. As the ever-growing wave of security and privacy incidents continues, we’ve seen legislative reactions such as GDPR or the California Consumer Privacy Act (CCPA) emerge. A decade ago, CIOs would typically manage all aspects of data security and privacy based on the advice of a dedicated information security specialist. The sheer level of regulatory, financial and reputational damage at stake has shifted those responsibilities to chief information security officers (CISOs). Now prominent board advisers, CISOs are responsible for mitigating security and privacy risks, maintaining compliance, and preventing incidents from impacting the business. In the past, CIOs would typically be responsible for collecting, organizing and retroactively reporting on company data. Now, we view data as a business enabler that can highlight meaningful trends, provide predictive models and help maximize efficiency and profitability.


3 important trends in AI/ML you might be missing

Gone are the days when on-premises versus cloud was a hot topic of debate for enterprises. Today, even conservative organizations are talking cloud and open source. No wonder cloud platforms are revamping their offerings to include AI/ML services. With ML solutions becoming more demanding in nature, the number of CPUs and RAM are no longer the only way to speed up or scale. More algorithms are being optimized for specific hardware than ever before – be it GPUs, TPUs, or “Wafer Scale Engines.” This shift towards more specialized hardware to solve AI/ML problems will accelerate. Organizations will limit their use of CPUs to solving only the most basic problems. The risk of being obsolete will render generic compute infrastructure for ML/AI unviable. That’s reason enough for organizations to switch to cloud platforms. The increase in specialized chips and hardware will also lead to incremental algorithm improvements leveraging the hardware. While new hardware/chips may allow use of AI/ML solutions that were earlier considered slow/impossible, a lot of the open-source tooling that currently powers the generic hardware needs to be rewritten to benefit from the newer chips.


The PSD2 deadline: 8 things businesses need to know

For many commentators, the security implications of opening up account data is a top concern, but open banking poses many more challenges than this. A detailed inspection of the small print of both PSD2 and the FCA’s new guidelines for payment service providers shows that the legislation has repercussions far beyond security, that are not so well understood by many in the financial services sector. The complexities are so vast that compliance officers may even be scratching their heads in bewilderment. Here are a few things you need to be aware of. There are now two new classes of payment service providers under PSD2. In addition to standard banks and building societies, PSD2 recognises account information service providers (AISPs) and payment initiation service providers (PISPs). The latter offer services such as bill payment and peer-to-peer transfers, by initiating “a payment from the user account to the merchant account by creating a software bridge”. The former, meanwhile, provide aggregated bank account information and analysis services. PSD2 applies to non-EU transactions where one leg is carried out by a PSP outside Europe, in addition to those taking place on EU soil.


Is your board risk-ready?


As expectations and pressures evolve, corporate directors have been taking action: adding risk committees, ensuring there is a critical mass of risk expertise on the board, measuring how much attention they pay to risks, and understanding how cultural dynamics affect risk decisions. A recent Spencer Stuart study found that 12 percent of S&P 500 companies had risk committees in 2019 — a small number, but up from 9 percent in 2014. Finance and utility companies were by far the most likely to have risk committees, in no small part for regulatory reasons. But the vast majority — more than 95 percent — of S&P 500 companies assess the performance of their board of directors annually, as do 80 percent of companies in the Russell 3000, according to a 2019 report (pdf) by the Conference Board and data-mining firm ESGAUGE. There is evidence that boards are reacting to assessments. In PwC’s 2019 Annual Corporate Directors Survey, for example, an impressive 72 percent of directors said their boards made changes in response to the last board performance assessment — up from 49 percent just three years earlier.


Hackers are working harder to make phishing and malware look legitimate


The report found that hackers are no longer using fake invoices to trick businesspeople. Now they are pretending to be company employees asking partners to take action. In December, cybercriminals compromised the account of an employee at a Chinese venture capital firm. They spoofed the domain of an Israeli startup the Chinese firm had been working with and managed to steal $1 million in funding meant for the Israeli company. Trend Micro shared an example of a BEC email caught by the Cloud App Security platform. The email supposedly from the CEO included phrases like "No one else except us must be informed at this time," and "First, provide me immediately the available cashflow of our bank account," and "As soon as I receive those information, I will share with you further instructions." Bad actors also are using new credential phishing techniques, including malicious voice mails and shared files. One phishing campaign in July 2019 used fake OneNote Online pages hosted on a SharePoint subdomain that linked to a fake Microsoft login page.


Linux Foundation open sources disaster-relief IoT firmware: Project OWL


Project OWL (Organization, Whereabouts, and Logistics) creates a mesh network of Internet of Things (IoT) devices called DuckLinks. These Wi-Fi-enabled devices can be deployed or activated in disaster areas to quickly re-establish connectivity and improve communication between first responders and civilians in need. In OWL, a central portal connects to solar- and battery-powered, water-resistant DuckLinks. These create a Local Area Network (LAN). In turn, these power up a Wi-Fi captive portal using low-frequency Long-range Radio (LoRa) for Internet connectivity. LoRa has a greater range, about 10km, than cellular networks. LoRa also avoids the danger of having its bandwidth throttled by cellular carriers. That, by the way, actually happened in 2018 in Northern California's Mendocino Complex Fire when Verizon slowed the first responders' internet. DuckLinks then provides an emergency mesh network to all Wi-Fi enabled devices in range. This can be used both by people needing help and first responders trying to get a grip on the situation with data analytics. Armed with this information, they can then formulate an action plan.


Has an AI Cyber Attack Happened Yet?


One of the biggest ways in which we can see AI-assisted cyber attacks affecting our daily lives is through Twitter. We’ve all heard one political party or another accusing the other of using "bots" to misrepresent arguments or make it seem like certain factions had more followers than they actually did. Bots by themselves aren’t a huge deal, and lots of companies and services use bots to drive customer engagement and funnel people through different areas of the website. We’ve all seen the bot-powered chat boxes on sites where you might have a question, like the homepage of a college. But the real issue with bots is that they are becoming more sophisticated. In an ironic twist to the Turing test, it’s becoming increasingly difficult for people to tell bots apart from real people, even though machines once almost universally failed the test. Google has recently reported improved quality metrics for AI-generated audio and video, demonstrating this trend. These bots can pretty easily be used for misinformation, like when users marshal them to flood a Twitter thread with fake posters to influence an argument.



Quote for the day:


"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson


Daily Tech Digest - March 11, 2020

Open-source options offer increased SOC tool interoperability

"What we're trying to do as an industry, if we can align around a common data model and a common set of APIs, then that problem [a lack of interoperable security tools] becomes a much smaller problem than it is today," Chris Smith, senior sales engineer at McAfee, tells CSO. STIX-Shifter, contributed by IBM and built around STIX (Structured Threat Information eXpression), is useful "if you're threat hunting and you want to query all your other tools for evidence of a certain artifact use STIXShifter to ask that question in a vendor-neutral platform agnostic language," the GitHub repo said. "STIX Shifter would be the technology that enables a company to search for an indicator of compromise across multiple tools, data repositories," Jason Keirstead, chief architect, IBM Security Threat Management, tells CSO. "If that search turns up a compromised device, OpenDXL Ontology would be the mechanism that would be used to issue alerts/notifications across other tools in order to begin remediation."
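The vendor-neutral translation idea can be sketched in a few lines. This is a toy illustration only: it handles a single-comparison STIX 2 pattern and two invented target dialects, whereas the real STIX-Shifter project implements the full pattern grammar and translates into each vendor's actual query language:

```python
import re

def parse_stix_pattern(pattern):
    """Parse a single-comparison STIX 2 pattern such as
    "[ipv4-addr:value = '198.51.100.1']" into (object_path, value).

    A toy parser: real implementations handle boolean operators,
    qualifiers, and the rest of the pattern grammar.
    """
    m = re.match(r"\[([\w.:-]+)\s*=\s*'([^']*)'\]", pattern.strip())
    if not m:
        raise ValueError(f"unsupported pattern: {pattern}")
    return m.group(1), m.group(2)

def to_vendor_query(pattern, dialect):
    """Translate the parsed pattern into hypothetical vendor syntaxes."""
    path, value = parse_stix_pattern(pattern)
    field = path.split(":")[-1]
    if dialect == "sql":
        return f"SELECT * FROM events WHERE {field} = '{value}'"
    if dialect == "lucene":
        return f"{field}:{value}"
    raise ValueError(f"unknown dialect: {dialect}")

ioc = "[ipv4-addr:value = '198.51.100.1']"
print(to_vendor_query(ioc, "sql"))     # one pattern, SQL-style query
print(to_vendor_query(ioc, "lucene"))  # same pattern, search-index query
```

The point is the shape of the interface: the analyst writes the indicator once, in a neutral language, and each connected tool receives it in the dialect it understands.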



Enterprises roll out private 5G while standards, devices, coverage evolve

Outside of private deployments, 5G coverage remains an obstacle. All the major carriers, including AT&T, Verizon, Sprint, and T-Mobile, are promising 5G connectivity, but in practice it's limited to a few areas in the biggest cities. Consumers don't have 5G-capable phones yet, so the carriers' 5G promises are little more than marketing hype for the time being. Gartner, for example, places 5G at the "peak of inflated expectations" in its most recent hype cycle report and predicts that it will take two to five years before 5G reaches what the analyst firm calls the "plateau of productivity," when mainstream adoption starts to take off. Until that happens, many enterprises are circumventing the lack of coverage by deploying private 5G in factories, college campuses, hospitals, office buildings, or other contained environments – just as the VA Palo Alto hospital did. "We believe that enterprise deployments have the potential to be the most significant and leading set of use cases for 5G," says Dan Hays, principal and head of US corporate strategy practice at PricewaterhouseCoopers.


Details about new SMB wormable bug leak in Microsoft Patch Tuesday snafu

According to Fortinet, the bug was described as "a Buffer Overflow Vulnerability in Microsoft SMB Servers" and received a maximum severity rating. "The vulnerability is due to an error when the vulnerable software handles a maliciously crafted compressed data packet," Fortinet said. "A remote, unauthenticated attacker can exploit this to execute arbitrary code within the context of the application." A similar description was also posted -- and later removed -- in a Cisco Talos blog post. The company said that "the exploitation of this vulnerability opens systems up to a 'wormable' attack, which means it would be easy to move from victim to victim." ... However, there is currently no danger to organizations worldwide. Only details about the bug leaked online, not actual exploit code, as happened in 2017. Although today's leak alerted some bad actors about a major bug's presence in SMBv3, exploitation attempts aren't expected to start anytime soon. Furthermore, there are also other positives. For example, this new "wormable SMB bug" only impacts SMBv3, the latest version of the protocol, included only with recent versions of Windows.


Dump your passwords, improve your security -- really


Your first encounter with FIDO likely won't look much different than two-factor authentication. You'll first type a conventional password, then plug in or wirelessly connect a FIDO hardware security key. The process still uses passwords, but it's more secure than passwords alone or passwords bolstered by codes sent by SMS or retrieved from authenticators like Google Authenticator. This approach -- password plus security key -- is how you can use FIDO today on Google, Dropbox, Facebook, Twitter and Microsoft services like Outlook.com and eventually Windows. "Hardware security keys are very, very secure," said Diya Jolly, chief product officer of authentication service company Okta. That's why congressional campaigns, the Canadian government's computing services division and all Google employees use them. Consumer services today often require you to plug in the keys only when logging in for the first time on a new PC or phone, or when you're taking a particularly sensitive action like transferring money out of your bank account or changing your password. Of course, a security key can be a hassle if you don't have it readily available when you need it.


What is LLVM? The power behind Swift, Rust, Clang, and more

At its heart, LLVM is a library for programmatically creating machine-native code. A developer uses the API to generate instructions in a format called an intermediate representation, or IR. LLVM can then compile the IR into a standalone binary or perform a JIT (just-in-time) compilation on the code to run in the context of another program, such as an interpreter or runtime for the language. LLVM’s APIs provide primitives for developing many common structures and patterns found in programming languages. For example, almost every language has the concept of a function and of a global variable, and many have coroutines and C foreign-function interfaces. LLVM has functions and global variables as standard elements in its IR, and provides mechanisms for creating coroutines and interfacing with C libraries. Instead of spending time and energy reinventing those particular wheels, you can just use LLVM’s implementations and focus on the parts of your language that need the attention. ... LLVM’s architecture-neutral design makes it easier to support hardware of all kinds, present and future. For instance, IBM recently contributed code to support its z/OS, Linux on Power, and AIX architectures for LLVM’s C, C++, and Fortran projects.
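To make the "programmatically creating IR" idea concrete, here is a minimal sketch using llvmlite, a widely used Python binding for LLVM's IR layer (an assumption here: llvmlite is installed; the function name `add` and the module name `demo` are illustrative, not from the article). It builds the IR for a function that adds two 32-bit integers:

```python
# Sketch: build LLVM IR for `i32 add(i32, i32)` with llvmlite's IR builder.
from llvmlite import ir

module = ir.Module(name="demo")            # container for functions and globals
i32 = ir.IntType(32)

# Declare the function signature: i32 add(i32, i32)
fnty = ir.FunctionType(i32, (i32, i32))
fn = ir.Function(module, fnty, name="add")

# Emit the body: one basic block that returns a + b.
block = fn.append_basic_block(name="entry")
builder = ir.IRBuilder(block)
a, b = fn.args
builder.ret(builder.add(a, b, name="sum"))

print(module)  # textual IR that LLVM can then compile to native code or JIT
```

The printed module is ordinary textual LLVM IR; the same `ir.Module` can instead be handed to llvmlite's binding layer for JIT execution, mirroring the two paths (standalone binary vs. JIT) described above.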


Accelerating ML Inference on Raspberry Pi With PyArmNN

Arm NN is an inference engine for CPUs, GPUs, and NPUs. It executes ML models on-device in order to make predictions based on input data. Arm NN enables efficient translation of existing neural network frameworks, such as TensorFlow Lite, TensorFlow, ONNX, and Caffe, allowing them to run efficiently and without modification across Arm Cortex-A CPUs, Arm Mali GPUs, and Arm Ethos NPUs. PyArmNN is a newly developed Python extension for the Arm NN SDK. In this tutorial, we are going to use PyArmNN APIs to run a fire detection image classification model, fire_detection.tflite, and compare the inference performance with TensorFlow Lite on a Raspberry Pi. Arm NN provides a TFLite parser, armnnTfLiteParser, which is a library for loading neural networks defined by TensorFlow Lite FlatBuffers files into the Arm NN runtime. We are going to use the TFLite parser to parse our fire detection model for “Fire” vs. “Non-Fire” image classification.
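The parse-optimize-run flow described above can be sketched with the PyArmNN API roughly as follows. This is a sketch under stated assumptions, not a tested implementation: it requires pyarmnn installed on an Arm device, the input shape `(1, 128, 128, 3)` is a placeholder (the real shape comes from the model's input binding info), and preprocessing is omitted.

```python
import numpy as np
import pyarmnn as ann

# Parse the TFLite model into an Arm NN network.
parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile('./fire_detection.tflite')

# Create a runtime and optimize the network for the CPU backends.
options = ann.CreationOptions()
runtime = ann.IRuntime(options)
preferred_backends = [ann.BackendId('CpuAcc'), ann.BackendId('CpuRef')]
opt_network, _ = ann.Optimize(network, preferred_backends,
                              runtime.GetDeviceSpec(), ann.OptimizerOptions())
net_id, _ = runtime.LoadNetwork(opt_network)

# Look up input/output bindings from the parsed graph.
graph_id = 0
input_name = parser.GetSubgraphInputTensorNames(graph_id)[0]
input_binding = parser.GetNetworkInputBindingInfo(graph_id, input_name)
output_name = parser.GetSubgraphOutputTensorNames(graph_id)[0]
output_binding = parser.GetNetworkOutputBindingInfo(graph_id, output_name)

# Run inference on a preprocessed image (placeholder shape shown).
image = np.zeros((1, 128, 128, 3), dtype=np.float32)
input_tensors = ann.make_input_tensors([input_binding], [image])
output_tensors = ann.make_output_tensors([output_binding])
runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)
results = ann.workload_tensors_to_ndarray(output_tensors)  # e.g. class scores
```

On a Raspberry Pi, the same model can be run through the stock TensorFlow Lite interpreter to compare per-inference latency, which is the comparison the tutorial sets out to make.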


Instant Low Code Database Web App - ASP.NET Core 3.1 Single Page Application (SPA)


A single-page application (SPA) is defined as a web application that fits on a single web page, with the goal of providing a user experience closer to that of a desktop application. It can be used to create a full-blown business web application linked to a database, or to quickly create a web application that can traverse, search, and report on a large database. The following sample application code is an alternative to using libraries such as AngularJS, React, or Vue. Only jQuery and Bootstrap are used, in conjunction with vanilla JavaScript, HTML, and CSS. A very simple approach is used: overlaying div tags and making Ajax calls to read and update the database, without any postback. The grid and detail forms included in this application also contain simple CSS that makes them automatically resize to any mobile device, down to an iPhone. Horizontal and vertical scrolling or swiping lets the user quickly read all data columns and rows in a grid. Parent, child, and grandchild CRUD grids can be recreated, over and over, within seconds.


What's the difference between RPA and IPA?


IPA development and implementations are significantly more complex. The technology requires data extraction and classification, machine learning and AI to foster decision-making. Businesses using IPA will need experts on hand who have an in-depth understanding of an ever-growing set of tools and capabilities in the space. Agarwal said the technical skill requirements for users are a key distinction IT executives should be aware of upfront. The technical skill required for RPA ranges from basic to mature, whereas the technical skill required for IPA ranges from mature to advanced. RPA, not surprisingly, has considerably more traction as a result of this ease of use. "There are more processes being automated with RPA than IPA," he said. Process efficiencies associated with RPA, however, are not as high as the potential efficiencies realized by IPA. Agarwal said in RPA deployments, humans continue to play a significant role in data extraction and decision-making alongside the rules-based processing handled by RPA tools. IPA, in contrast, promises greater value in reducing manual labor costs, because it automates much of the human decision-making.


Enterprises being won over by speed, effectiveness of network automation

gears / build management + automation / circuits
It's a burgeoning field: MarketsandMarkets Research reports that the global network automation market is on track to grow from $2.3 billion in 2017 to an estimated $16.9 billion by 2022. "It’s a really exciting topic in the networking industry right now because the scale and complexity of networks is really greater than it ever was before," says Brandon Butler, senior research analyst covering enterprise networks at IDC, a Framingham, Mass.-based industry analyst firm. "It's a revolution we're still in the early days of. There are more mobile workers out there, accessing high-bandwidth company apps from more diverse places. By 2025, there are going to be 41.6 billion connected IoT devices that enterprises are getting data and insights from. If your network is down, it touches everything in the company. Relying on manual, ad-hoc management isn't efficient, scalable or secure." And while it's an exciting market, it really is in its infancy, according to Andre Kindness, principal analyst at Forrester, a Cambridge, Mass.-based research firm. He notes that enterprises might be automating firewall configurations or the monitoring of their switches and traffic.


UK government survives rebellion on ‘high-risk’ comms tech supplier strategy

Though relieved, the UK’s comms industry warned that it would still take a huge hit from the decision. In January 2020, EE network owner BT warned that abiding by the UK government’s decision to restrict access to kit from suppliers such as Huawei could have a potential impact of around £500m, while in February 2020 Vodafone calculated that removing Huawei equipment that already exists in its core networks across Europe would cost as much as €200m over the next five years. Such recommendations were never accepted by a core group of backbench MPs among the UK’s ruling Conservative Party, and former leader Iain Duncan Smith led a rebellion against the Telecommunications Infrastructure Bill, proposing an amendment that would lead to an outright ban on Huawei technology, which he said posed a real and direct threat to the UK’s national security. Duncan Smith’s amendment would have seen firms classified as high-risk by the National Cyber Security Centre banned entirely from the UK’s 5G project by 31 December 2022.



Quote for the day:


"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson