Daily Tech Digest - February 03, 2020

Why UK's Huawei decision leaves the fate of global 5G wireless in US hands

"The UK has been doing business with Huawei for a long time through Openreach. They had been operating, with oversight, in the country for years," noted Doug Brake, who directs broadband and spectrum policy for Washington, DC-based Information Technology & Innovation Foundation. Openreach, to which Brake refers, is the division of top British telco BT responsible for deploying fiber optic infrastructure. It had been partnering mainly with Huawei until last November, when it began an evaluation process in search of additional partners. "So for the UK to come out and publicly brand them as a high-risk vendor, cordon them off to only 35 percent of the access network — not even let them into the core network," said Brake, "really puts Huawei in a tight box." For its part, Huawei did what it could Tuesday to thwart any possible interpretation of tightness, or a box. Omitting any mention of security or exploiting back doors in the infrastructure, Huawei Vice President Victor Zhang issued a statement, reading in part: "This evidence-based decision will result in a more advanced, more secure, and more cost-effective telecoms infrastructure that is fit for the future..."



Lex: An Optimizing Compiler for Regular Expressions

This perhaps isn't the fastest C# NFA regex engine around yet, but it does support Unicode and lazy expressions, and it is getting faster thanks to the optimizing compiler. A Pike Virtual Machine is a technique for running regular expressions that relies on input programs to dictate how to match. The VM is an interpreter that runs the bytecode that executes the matching operation. The bytecode itself is compiled from one or more regular expressions. Basically, a Pike VM is a little cooperatively scheduled concurrent VM for running that bytecode. It has some cleverness in it to avoid backtracking. It's potentially extremely powerful and very extensible, but this one is still a baby and very much a work in progress. The VM itself is solid, but the regular expressions could use a little bit of shoring up, and it could use some more regex features, like anchors.
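To make the idea concrete, here is a toy Pike-VM-style interpreter in Python (the opcodes and bytecode layout are invented for this sketch, not taken from the engine described above). Threads advance in lockstep over the input, one character at a time, which is exactly the trick that avoids backtracking:

```python
# Toy Pike-VM-style interpreter: all live "threads" (program counters) step
# together over the input, so the engine never has to backtrack.
CHAR, SPLIT, JMP, MATCH = range(4)

def follow_eps(program, pcs):
    """Expand SPLIT/JMP (epsilon) transitions to a fixpoint, deduplicating."""
    stack, seen = list(pcs), set()
    while stack:
        pc = stack.pop()
        if pc in seen:
            continue
        seen.add(pc)
        op = program[pc]
        if op[0] == JMP:
            stack.append(op[1])
        elif op[0] == SPLIT:
            stack.extend((op[1], op[2]))
    return seen

def run(program, text):
    """True if the bytecode program matches the whole input string."""
    threads = {0}
    for ch in text:
        nxt = set()
        for pc in follow_eps(program, threads):
            op = program[pc]
            if op[0] == CHAR and op[1] == ch:
                nxt.add(pc + 1)          # thread survives, advance past CHAR
        threads = nxt
    return any(program[pc][0] == MATCH for pc in follow_eps(program, threads))

# Hand-compiled bytecode for the pattern a(b|c)*d
prog = [
    (CHAR, "a"),     # 0
    (SPLIT, 2, 7),   # 1: take another loop iteration, or exit to 'd'
    (SPLIT, 3, 5),   # 2: choose the 'b' branch or the 'c' branch
    (CHAR, "b"),     # 3
    (JMP, 1),        # 4
    (CHAR, "c"),     # 5
    (JMP, 1),        # 6
    (CHAR, "d"),     # 7
    (MATCH,),        # 8
]
```

Compiling `a(b|c)*d` to `prog` by hand shows the shape of what an optimizing compiler would emit automatically.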


Google launches open-source security key project, OpenSK


FIDO is a standard for secure online access via a browser that goes beyond passwords. There are three modern flavours of it: Universal Second Factor (U2F), Universal Authentication Framework (UAF), and FIDO2. UAF handles biometric authentication, while U2F lets people authenticate themselves using hardware keys that you can plug into a USB port or tap on a reader. That works as an extra layer on top of your regular password. FIDO2 does away with passwords altogether, pairing a hardware key with an authentication protocol called WebAuthn. This uses the digital token on your security key to log straight into a compatible online service. To date, Yubico and Google have both been popular providers of FIDO-compatible keys, but they’ve done so using their own proprietary hardware and software. Google hopes that by releasing an open-source version of FIDO firmware, it will accelerate broader adoption of the standard. Google has designed the OpenSK firmware to work on a Nordic dongle, which is a small uncased board with a USB connector on it.
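The FIDO2 login flow boils down to public-key challenge-response: the service sends a fresh challenge, the key signs it with a private key that never leaves the device, and the service verifies against the registered public key. A toy sketch of that shape (textbook RSA with classic demo numbers; real keys use proper ECDSA or RSA via WebAuthn, not this):

```python
import hashlib, secrets

# Classic demo RSA parameters (n = 61 * 53); wildly insecure, illustration only.
N, E, D = 3233, 17, 2753

def sign(challenge: bytes) -> int:
    """What the security key does: sign the service's challenge."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(h, D, N)

def verify(challenge: bytes, sig: int) -> bool:
    """What the online service does: check the signature with the public key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(sig, E, N) == h

challenge = secrets.token_bytes(16)   # fresh per login attempt, defeats replay
assert verify(challenge, sign(challenge))
```

Because only the public half (`N`, `E`) is registered with the service, a breach of the service's database leaks nothing an attacker could use to impersonate the user.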


Early use of AI for finance focused on operations, analytics


Anecdotal evidence suggests AI excels at financial processes that involve repetitive operations on large volumes of data. "It will eliminate the need for people to do a lot of the boring, repetitive work that they're doing today," Kugel said. "It will make it possible for systems to wrap themselves around the habits and requirements of the user, as opposed to the user having to adapt how they work within the limitations of technology." Data quality will also improve and, with it, the quality of analytics as AI gets better at flagging errors for people to correct, Kugel said. AI is also helping with tedious accounts payable tasks, such as confirming that goods were received and that an invoice contains the right items, Tay said. Companies that use automated payments are deploying machine learning to scan payment patterns for deviations. "If the machine learning algorithm tells them that the probability is high that the goods have been received and everything is good with that specific invoice, they'll pay it immediately," Tay said.
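A hedged sketch of the payment-pattern idea: treat a vendor's invoice history as a baseline and auto-pay only amounts that fall within it. The threshold and the data are invented for illustration; production systems use far richer features than amount alone:

```python
from statistics import mean, stdev

def auto_payable(history, amount, z_max=3.0):
    """Pay automatically only if amount is within z_max std devs of history."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) <= z_max * sigma

past = [102.0, 98.5, 101.0, 99.0, 100.5, 97.5]   # prior invoices, same vendor
routine = auto_payable(past, 103.0)    # normal amount: straight-through pay
suspect = auto_payable(past, 250.0)    # deviation: route to a human reviewer
```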


SaaS, PaaS, IaaS: The differences between each and how to pick the right one

In theory, PaaS, IaaS and SaaS are designed to do two things: cut costs and free organizations from the time and expense of purchasing equipment and hosting everything on-premises, DiDio said. "However, cloud computing services are not a panacea. Corporate enterprises can't just hand everything off to a third-party cloud provider and forget about them. There's too much at stake." Internal IT departments must remember what DiDio calls "the three Cs: communication, collaboration and cooperation," which she said are all essential for successful business outcomes and smooth, efficient, uninterrupted daily operations. "When properly deployed and maintained, IaaS is highly flexible and scalable,'' DiDio said. "It's easily accessed by multiple users. And it's cost effective." IaaS is beneficial to businesses of all types and sizes, she said. "It provides complete and discretionary control over infrastructure… Many organizations find that they can slash their hardware costs by 50% or more using IaaS." However, IaaS "requires a mature operations model and rigorous security stacks including understanding cloud provider technologies,'' noted Vasudevan. IaaS also "requires skill and competency in resource management."


Startup uses machine learning to support GDPR’s right to be forgotten

“Every user has over 350 companies holding sensitive data on them, which is quite shocking,” says Ringel. “Not only that, but this number is growing by eight new companies a month, which means our personal footprint is highly dynamic and changing all the time.” According to Ringel, the conversation about data privacy needs to focus much more on data ownership. “Privacy is all about putting fences around us, preventing our personal information being shared with other people,” he says. “But the problem with that is that we miss out on the fun – every day we use online services and share our data with companies because it is convenient and efficient. Now, with GDPR, we can actually take our data back whenever we choose.” Once users know where their data is, Mine helps them reclaim it by submitting automated right-to-be-forgotten requests to the companies with the click of a button. For users on the trial version of Mine, the startup will email the request to the company and copy the user in to follow-up communications.


Serverless Cloud Computing Will Drive Explosive Growth In AI-Based Innovation

As cloud computing has advanced, more companies have made the transition to the cloud-based platform as a service model (PaaS), which delivers computing and software tools over the internet. PaaS can be scaled up or down as needed, which reduces up-front costs and allows you to focus on developing software applications instead of dealing with hardware-oriented tasks. To support this shift toward the PaaS cloud, public cloud companies have begun heavily investing in building or acquiring serverless components that have pre-built unit functionality. These out-of-the-box tools allow organizations to test new concepts, iterate and evaluate without taking on high risk or expense. In the past, only large companies with considerable resources could afford to experiment with AI-based innovation. Now startups or small teams within larger enterprises have access to cloud-based, prepackaged algorithms offering different AI models that can fast-track innovation. Let’s explore practical examples of how this trend helps democratize innovation in artificial intelligence by minimizing the time, money and resources needed to get started.
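To show how small a serverless "unit of functionality" can be, here is a minimal handler sketch in the common AWS-Lambda-style signature; the event shape and the keyword-based "model" are illustrative stand-ins, not any specific provider's API or a real AI service:

```python
import json

def handler(event, context=None):
    """One function, no servers to manage: the platform scales invocations."""
    text = event.get("text", "")
    # stand-in for a prepackaged sentiment model a cloud provider might offer
    sentiment = "positive" if "great" in text.lower() else "neutral"
    return {"statusCode": 200, "body": json.dumps({"sentiment": sentiment})}

resp = handler({"text": "This product is great"})
```

The point of the sketch is the cost model: the team writes only the function body, and experimentation costs scale with invocations rather than with provisioned hardware.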


The Past, Present And Future Of Oracle’s Multi-Billion Dollar Cloud Bet

Larry had more confidence than I did. He was sure of it. I was more cautiously optimistic. We started running our little business on QuickBooks because we hadn’t built our system yet. When our system got to the point where we could run our own business on it, I imported our QuickBooks file and saw our business in a browser at home. I was at home looking at all the key metrics of how we were spending, and how we were growing, and who our employees were, all there in the browser. That’s when I was sure it was going to work because I knew we were first to do that. I felt that with Larry’s strong backing we’d be able to reach a lot of companies, and that’s what happened. He was sure from the very beginning. It really was his idea to do it as a web-based application. He was the pioneer, and this was before Salesforce.com started, which he was also involved with. He wanted to do accounting, and I encouraged us to move beyond just accounting, and together we came up with this concept of the suite, and thus the name of the company, ultimately, became NetSuite.


Rogue IoT devices are putting your network at risk from hackers


Security standards for IoT devices aren't as stringent as they are for other products such as smartphones or laptops, so in many cases, it's been known for IoT manufacturers to ship highly insecure devices – and sometimes these products never receive any sort of patch, either because the user isn't aware of how to apply it or because the company never issues one. A large number of connected devices are also easily discoverable with the aid of the IoT search engine Shodan. Not only does this leave IoT products potentially vulnerable to being compromised and roped into a botnet; insecure IoT devices connected to corporate networks could also enable attackers to use something as trivial as a fitness tracker or a smartwatch as an entry point into the network, and use it as a means of further compromise. "Personal IoT devices are easily discoverable by cybercriminals, presenting a weak entry point into the network and posing a serious security risk to the organisation. Without a full view of the security policies of the devices connected to their network, IT teams are fighting a losing battle to keep the ever-expanding network perimeter safe," said Malcolm Murphy, Technical Director for EMEA at Infoblox.
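One practical countermeasure the quote points at is simply knowing what is on the network. A minimal inventory-diff sketch (the MAC addresses are invented) that surfaces unmanaged devices observed in, say, DHCP or ARP logs:

```python
# Approved asset inventory vs. devices actually observed on the network.
approved = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}     # managed endpoints
observed = {"aa:bb:cc:00:00:01", "de:ad:be:ef:00:99"}     # e.g. from DHCP logs

rogue = observed - approved   # unmanaged (possibly IoT) devices to triage
```

Real deployments automate exactly this set difference continuously, feeding the `observed` side from network telemetry rather than a hand-typed set.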


Europe’s new API rules lay groundwork for regulating open banking


The EU and the U.K. have both passed laws that explicitly require their banks to create application programming interfaces and open those APIs to third-party developers. And banks in the U.S. should take notice. These new laws are paving the way to standardization for open banking, which could lead to rapid innovation and a competitive advantage for the European banking system. They are also friendlier to fintech companies, as they streamline access to a growing network of bank data. Fintechs within the U.S. must create individual data-sharing agreements with each bank partner, and the negotiations for each partnership can be resource intensive. In the EU, however, a fintech can get access to all bank APIs by registering as an account information service provider (AISP) or payment initiation service provider (PISP). This could create a situation where the U.S. loses out on technology investments and sees innovative financial professionals leave the nation to work in the rapidly advancing open-banking environment within the EU.
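The single-registration point can be sketched as one AISP credential working against many banks behind a standardized path. The endpoint path, header names and URLs below are hypothetical, not any real bank's API:

```python
def account_request(bank_base_url: str, aisp_token: str) -> dict:
    """Build (not send) a PSD2-style account-information request."""
    return {
        "url": f"{bank_base_url}/open-banking/v1/accounts",
        "headers": {"Authorization": f"Bearer {aisp_token}"},
    }

# One registration, many banks; no per-bank bespoke agreement required.
banks = ["https://api.bank-a.example", "https://api.bank-b.example"]
queue = [account_request(bank, "aisp-demo-token") for bank in banks]
```

Under the U.S. model, the equivalent would instead be one negotiated, bank-specific integration per data-sharing agreement.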



Quote for the day:


"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Daily Tech Digest - February 02, 2020

Just how big a deal is Google’s new Meena chatbot model?

Meena can chat, over a few turns of a conversation, believably. Meena, however, cannot reliably teach you anything. Meena is not trying to help you finish a task or learn something new specifically. It converses with no explicit goal or purpose. While we probably spend too much of our time chatting about not much of importance, we tend to be looking for something specific when interacting with a bot-powered digital service. We want to get a ticket booked or a customer support issue resolved. Or we want to get accurate information about a particular domain or emotional or psychological support for a challenge we are facing. Conversational products have a purpose, and even if they fail at the more open-ended questions, they are trying to work with you to complete a task. Meena places the human-likeness of the conversation above all. However, there is much for us to learn about what is an appropriate conversational approach given different types of tasks. There is research that shows that more “robot” like responses are preferable in certain situations (especially where sensitive personal information is involved) and that being human-like is not the be-all and end-all of bots. Where does Meena, with the conversations it has learned from social media interactions, find a role?



Environmental IoT is one area they say could benefit. In smart cities, for example, bacteria could be programmed to sense for pollutants. Microbes have good chemical-sensing functions and could turn out to work better than electronic sensors. In fact, the authors say that microbes share some of the same sensing, actuating, communicating and processing abilities that the computerized IoT has. In the case of sensing and actuating, bacteria can detect chemicals, electromagnetic fields, light, mechanical stress and temperature — just what’s required in a traditional printed circuit board-based sensor. Plus, the microbes respond. They can produce colored proteins, for example. And not only that, they respond in a more nuanced way compared to the chip-based sensors. They can be more sensitive, as one example. ... Bacteria should become a “substrate to build a biological version of the Internet of Things,” the scientists say. Interestingly, similar to how traditional IoT has been propelled forward by tech hobbyists mucking around with Arduino microcontrollers and Raspberry Pi educational mini-computers, Kim and Posland reckon it will be do-it-yourself biology that will kick-start IoBNT.


AI still doesn’t have the common sense to understand human language


The test was originally designed with the idea that such problems couldn’t be answered without a deeper grasp of semantics. State-of-the-art deep-learning models can now reach around 90% accuracy, so it would seem that NLP has gotten closer to its goal. But in their paper, which will receive the Outstanding Paper Award at next month’s AAAI conference, the researchers challenge the effectiveness of the benchmark and, thus, the level of progress that the field has actually made. They created a significantly larger data set, dubbed WinoGrande, with 44,000 of the same types of problems. To do so, they designed a crowdsourcing scheme to quickly create and validate new sentence pairs. (Part of the reason the Winograd data set is so small is that it was hand-crafted by experts.) Workers on Amazon Mechanical Turk created new sentences with required words selected through a randomization procedure. Each sentence pair was then given to three additional workers and kept only if it met three criteria: at least two workers selected the correct answers, all three deemed the options unambiguous, and the pronoun’s references couldn’t be deduced through simple word associations.
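The three-worker validation rule reads naturally as a filter. The sketch below encodes the first two criteria (the word-association check is a separate step, omitted here), with invented worker records:

```python
def keep(pair):
    """Keep a sentence pair only if >=2 of 3 workers answered correctly
    and all three judged the answer options unambiguous."""
    votes = pair["workers"]
    correct = sum(1 for w in votes if w["correct"])
    unambiguous = all(w["unambiguous"] for w in votes)
    return correct >= 2 and unambiguous

good = {"workers": [{"correct": True, "unambiguous": True},
                    {"correct": True, "unambiguous": True},
                    {"correct": False, "unambiguous": True}]}
vague = {"workers": [{"correct": True, "unambiguous": True},
                     {"correct": True, "unambiguous": False},
                     {"correct": True, "unambiguous": True}]}
```

Cheap, parallel checks like this are what let crowdsourcing scale the data set to 44,000 problems where expert hand-crafting could not.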



A new bill could punish web platforms for using end-to-end encryption


The bill doesn’t lay out specific rules. But the committee — which would be chaired by the Attorney General — is likely to limit how companies encrypt users’ data. Large web companies have moved toward end-to-end encryption (which keeps data encrypted for anyone outside a conversation, including the companies themselves) in recent years. Facebook has added end-to-end encryption to apps like Messenger and WhatsApp, for example, and it’s reportedly pushing it for other services as well. US Attorney General William Barr has condemned the move, saying it would prevent law enforcement from finding criminals, but Facebook isn’t required to comply. Under the EARN IT Act, though, a committee could require Facebook and other companies to add a backdoor for law enforcement. Riana Pfefferkorn, a member of the Stanford Law School’s Center for Internet and Society, wrote a detailed critique of the draft. She points out that the committee would have little oversight, and the Attorney General could also unilaterally modify the rules. The Justice Department has pushed encryption backdoors for years, citing threats like terrorism, but they haven’t gotten legal traction. Now, encryption opponents are riding the coattails of the backlash against big tech platforms and fears about child exploitation online.
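The parenthetical definition above, data that stays encrypted for everyone outside the conversation including the platform itself, rests on key agreement between the endpoints. A toy Diffie-Hellman sketch (tiny demo parameters; real messengers use X25519 and ratcheting constructions, not this):

```python
import hashlib, secrets

# Toy prime (2**64 - 59) and generator; real deployments use vetted curves.
P, G = 0xFFFFFFFFFFFFFFC5, 5

a = secrets.randbelow(P - 2) + 2    # Alice's secret, never transmitted
b = secrets.randbelow(P - 2) + 2    # Bob's secret, never transmitted
A, B = pow(G, a, P), pow(G, b, P)   # only these public values transit the server

# Both endpoints derive the same symmetric key; the relaying platform cannot.
k_alice = hashlib.sha256(str(pow(B, a, P)).encode()).digest()
k_bob   = hashlib.sha256(str(pow(A, b, P)).encode()).digest()
assert k_alice == k_bob
```

The server relaying the handshake sees only `A` and `B`; recovering the shared key from those is the discrete-logarithm problem, which is exactly the property a mandated backdoor would have to break.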


Technologies of the future, but where are AI and ML headed to?


The fluid nature of data science allows people from multiple fields of expertise to come and crack it. Shantanu believes if JRR Tolkien, being the brilliant linguist that he was, had pursued data science to develop NLP models, he would have been the greatest NLP expert ever, and that is the kind of liberty and scope data science offers. ... For a country like India, acquiring new skills is not a luxury but a necessary requirement, and the trends of upskilling and reskilling are on the rise to match. But data science, machine learning, and artificial intelligence are fields where mere book-reading and formulaic interpretation and execution just do not cut it. If one aspires to have a competitive career in futuristic technologies, machine learning and data science demand a broader spectrum of understanding of probability, statistics, and mathematics at a fundamental level. To break the myth that only programmers and software developers can enter this market: machine learning involves an understanding of basic programming languages (Python, SQL, R), linear algebra and calculus, as well as inferential and descriptive statistics.
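That fundamental level is lower than the myths suggest. Descriptive statistics, for instance, start with nothing but Python's standard library (the sample data here is invented):

```python
from statistics import mean, median, stdev

salaries = [48, 52, 55, 61, 64, 70, 120]   # note the outlier at 120
m, med, sd = mean(salaries), median(salaries), stdev(salaries)
# The mean is pulled above the median by the outlier; the median resists it.
# Noticing and explaining that gap is descriptive statistics in miniature.
```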


Why Cybersecurity Training Needs a Serious Refresh

It’s easy to understand that if the technology market moves very fast, the security segments of it move even faster. This is the very definition of a dynamic environment—new dangers appear on the threat matrix every day, which means the ground is always shifting. It’s also easy to see how good security technology meets this challenge by constantly updating itself to combat new incoming threats. But here’s where it gets murky: Can we as individuals keep pace with the threats? And if we can’t, can even the most sophisticated tools ward off all dangers? No, we can’t, and that’s a big reason why the bad guys are usually ahead. Think of it as the human factor. The tools keep getting better, but inside this swirling vortex of innovation and sophistication, we as people—consumers, business professionals, and security specialists—have to scramble to understand new dangers and newer defenses. Even for tech teams dedicated to protecting the network, it’s a constant nightmare. For the rest of us, the reality is that while the threat matrix changes by the hour, IT security sessions take place maybe a few times a year, and it’s hard to even fit those into a busy schedule.


10 Key Challenges for Fintech Startups Worth Your Attention

Fintech Regulatory Issues
The financial services industry is arguably the most regulated in the world. Laws are enacted to safeguard financial systems from abuse. The emergence of fintech has changed the way we view and handle money, creating a grey area for regulation. This issue has drawn the attention of regulators and lawmakers. Therefore, fintech startups have to contend with different regulatory hiccups on a day-to-day basis because of their unstructured operating models. Besides, regulations on fintech operations vary from one jurisdiction to another. Therefore, startups should fully understand the legal complications before operating in a particular country. While fintech has brought much disruption to the financial industry, banks will not just sit back and watch as they lose their market share. Also, fintech ventures don’t only compete with existing financial powerhouses, such as PayPal. They soon will have to contend with new players, such as Amazon and other technology behemoths foraying into financial services. Due to their strong asset base, banks wield clout and can either buy out fintech companies or partner with them. As a venture, you should decide if you want to confront the big guys head-on or if you should instead explore greener pastures.


Why we’re failing to regulate the most powerful tech we’ve ever faced

Given the force of this technology, shouldn’t governments be bracing for its effect with robust regulations? The U.S. government so far is taking a mostly hands-off approach. U.S. Chief Technology Officer Michael Kratsios warned federal agencies against over-regulating companies developing artificial intelligence. There are views, too, that the U.S. government doesn’t want to issue meaningful regulation, that the administration finds regulation antithetical to its core beliefs. There is greater movement underway by the European Union (EU), which will issue a paper in February proposing new AI regulations for “high-risk sectors,” such as healthcare and transport. These rules could inhibit AI innovation in the EU, but officials say they want to harmonize and streamline rules in the region. China is pursuing a different strategy designed to tilt the playing field to its advantage as exemplified by its standards efforts for facial recognition. Ultimately, it is in the worldwide public interest for the AI superpowers, the U.S. and China, to collaborate on common AI principles.


Great Powers Must Talk to Each Other About AI


While the dynamics of artificial intelligence and machine learning, or ML, research remain open and often collaborative, the military potential of AI has intensified competition among great powers. In particular, Chinese, Russian and American leaders hail AI as a strategic technology critical to future national competitiveness. The military applications of artificial intelligence have generated exuberant expectations, including predictions that the advent of AI could disrupt the military balance and even change the very nature of warfare. At times, the enthusiasm of military and political leaders appears to have outpaced their awareness of the potential risks and security concerns that could arise with the deployment of such nascent, relatively unproven technologies. In the quest to achieve comparative advantage, military powers could rush to deploy AI/ML-enabled systems that are unsafe, untested or unreliable. As American strategy reorients toward strategic competition, critical considerations of surety, security and reliability around AI/ML applications should not be cast aside.


Burn, drown, or smash your phone: Forensics can extract data anyway

JTAG stands for Joint Test Action Group, the industry association that formed to create a standard for the manufacturing of integrated circuits. The NIST study only included Android devices because most Android devices are "J-taggable," while iOS devices aren't. The forensic technique takes advantage of taps, short for test access ports, which are usually used by manufacturers to test their circuit boards. By soldering wires onto taps, investigators can access the data from the chips. To perform a JTAG extraction, Reyes-Rodriguez first broke the phone down to access the printed circuit board (PCB). She carefully soldered thin wires the size of a human hair onto small metal components called taps, which are about the size of the tip of a thumbtack. "JTAG is very tedious and you do need a lot of training," says Ayers. "You need to have good eyes and very steady hand." The researchers compared JTAG to the chip-off method, which is another forensic technique. While JTAG work was done at NIST, the chip-off extraction was conducted by the Fort Worth Police Department Digital Forensics Lab and a private forensics company in Colorado called VTO Labs.



Quote for the day:


"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman


Daily Tech Digest - January 31, 2020

What Is A Data Passport: Building Trust, Data Privacy And Security In The Cloud


One of the biggest problems with data security is that so much of our computing these days takes place not in a physical mainframe, but in the cloud. It used to be that data thieves might have to break into a physical space to steal hard drives or mainframes in order to steal data. Not any more. With more and more computing of all kinds taking place in the cloud, that data can become extremely vulnerable. In fact, the movement of data between parties, through the cloud, is its most vulnerable point, and with the growing use of multi-cloud environments, the problem is only exacerbated. Imagine a single piece of information that must be transmitted from Company A to Company B. Company A knows that its servers are secure, and Company B feels like its data is also secure. But what about the “space” in between? Data passports allow the data to carry its own encryption with it, so that even if it is intercepted, it’s useless. This is extremely valuable for companies and industries that transmit data in multi-cloud environments, and will be especially useful in highly regulated industries like banking and insurance.
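A toy sketch of the passport idea: the payload travels only as ciphertext plus key-identifying metadata, so interception in the "space in between" yields nothing usable. The XOR keystream here is a stand-in for a real authenticated cipher such as AES-GCM, and the whole construction is illustrative only:

```python
import hashlib, itertools, os

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (toy construction)."""
    out = b""
    for counter in itertools.count():
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

def seal(key: bytes, payload: bytes) -> dict:
    """Package data with its own encryption before it leaves Company A."""
    ct = bytes(p ^ k for p, k in zip(payload, keystream(key, len(payload))))
    return {"key_id": hashlib.sha256(key).hexdigest()[:8], "ciphertext": ct}

key = os.urandom(32)                      # held by the authorized parties only
passport = seal(key, b"account records")  # this dict is what transits the clouds

ct = passport["ciphertext"]
opened = bytes(c ^ k for c, k in zip(ct, keystream(key, len(ct))))
```

Whichever cloud relays `passport` sees ciphertext and a key identifier, never the key, which is the property regulated industries are after.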


Getting practical about the future of work


The pace and scale of technological disruption—with its risks of unemployment and growing income inequality—are as much a social and political challenge as a business one. Nonetheless, employers are best placed to be in the vanguard of change and make positive societal impact—for example, by upgrading the capabilities of their employees and equipping them with new skills. And employers themselves stand to reap the greatest benefit if they can successfully transform the workforce in this way. Many leading businesses are realizing that they cannot hire all the new skills they need. The better solution is to look internally and develop talent they already have, as this approach is often not only quicker and more financially prudent but also good for morale and the company’s long-term attractiveness to potential recruits. We already know from our executive surveys that most leaders see talent as the largest barrier to the successful implementation of new strategies—notably, those driven by digitization and automation. 


What’s next? Modernising applications following cloud migration
IT teams should invest in a number of tactics to optimise performance. However, the number one challenge they face is their ability, or inability, to keep the application running and resolve problems with extremely thin teams. Visibility solves this critical problem. An overview into and across the entire application and IT infrastructure is paramount in keeping applications running to reduce MTTD (Mean Time To Detection) and MTTR (Mean Time To Resolution). Teams will have a better understanding of their current resources and scale appropriately. For example, they may discover that they have excessive server resources assigned to their application, over and above those necessary to safely run the application. Plus, they will have visibility into how cloud resources are performing (how utilised they are, whether they have the proper amount of disk space, memory, etc.) and can easily see what is and is not being used. Teams will be able to benefit from higher morale, key insights, and increased overall ownership.
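Even a single standard-library call illustrates the kind of signal such visibility aggregates. This sketch flags low disk headroom; the 10% threshold is an invented example, and real monitoring stacks collect this continuously across fleets:

```python
import shutil

def disk_headroom(path: str = ".") -> float:
    """Fraction of the filesystem at `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

low_disk = disk_headroom() < 0.10   # alert while there is still time to act
```

Catching this before the volume fills is the difference between a dashboard note and a 3 a.m. MTTR clock starting.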


The risks of state-sponsored cyberattacks are rising


The good news is that robust cybersecurity measures will ward off a state attack — and your garden-variety cybercriminal. “Attacks, whether from criminal or nation-state actors, largely use the same techniques. An organization’s continual vigilance to implement and maintain cybersecurity best practices is critical,” advises Cotton. Cotton suggests that small or medium-sized companies and organizations incorporate a “Red Team” exercise to identify employees who need additional protection or training, lest they become a spear-phishing target. Likewise, increased oversight of activity logs for such individuals would help. “When targeting critical management or operations employees of either a larger nation-state target or even their sub-contractors, the use of a smaller unconnected organization might be an easier way to infect a spear-phishing target’s home computer. Then the attack would move across the corporate VPN to the actual target of the attack,” details Cotton. He adds that some of these smaller seemingly unconnected organizations might be a local library or health-care system. Using that criteria, it’s understandable why MSPs should be concerned about state-sponsored attacks.


Cloud bursting is a deployment topology in which regular traffic is directed to an on-premise deployment by a load balancer. With increasing load, new instances are spun up in the public cloud after the traffic crosses a particular threshold, and additional traffic is directed there. This model is primarily used for cost optimization. A common scenario is to provision additional infrastructure in the public cloud to handle seasonal spikes and scale back or dismantle it after the traffic returns to normal. This often turns out to be a cheaper option compared to maintaining the same infrastructure on-premise, where it remains unused during relatively longer periods of regular traffic. ... Systems running in organizations' data centers experience unplanned downtimes due to various reasons, often causing loss to their business. To mitigate this they plan different levels of disaster recovery strategies depending on the criticality of the system/application. Setting up a disaster recovery site requires building and operating an offsite data center with its associated costs, which often looks like unnecessary overhead. 
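The bursting threshold is simple enough to sketch directly; the capacity figure below is invented for illustration:

```python
ON_PREM_CAPACITY = 1000   # requests/sec the on-premise deployment can absorb

def route(load_rps: int) -> tuple:
    """Split the current request rate between on-premise and public cloud."""
    on_prem = min(load_rps, ON_PREM_CAPACITY)
    return on_prem, load_rps - on_prem   # burst only the overflow to the cloud

normal = route(800)    # regular traffic: everything stays on-premise
spike = route(1500)    # seasonal spike: the overflow bursts to the cloud
```

The cost argument falls out of the second tuple element: cloud instances exist, and are billed, only while the overflow is non-zero.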


Gartner analysts suggest D&A leaders pilot blockchain smart contracts now, and companies should start deploying them to automate a simple process, such as non-sensitive data distribution or simple contract formation for contract performance or management purposes. D&A leaders must immediately respond to data challenges by using cost-benefit analysis, or programs won't mature enough to influence enterprise-level business strategy. D&A leaders should apply master data management (MDM) disciplines and data-quality metrics to improve process efficiency and drive overall higher ROI from D&A strategies. They should adopt emerging tech, such as machine learning (ML), blockchain, smart contracts, and graph tech, as a cost-effective means to increase data value and drive efficient decision-making. The report warns that if D&A management doesn't move to meet data challenges with a net-positive business value proposition, or if its influence stagnates, then neither the company nor enterprise-level D&A strategies can succeed.


Say Hello To Your New Digital Colleague


Imagine you’re currently working in a NOC (Network Operations Center), where people have to monitor multiple dashboards. They often have to escalate incidents to L2/L3 workers to troubleshoot, hold war room calls about root cause, and then actually take the manual steps to remedy them. What if you could take some of that manual work off the NOC workers and give them a digital colleague to assist? The digital colleague can reduce the number of errors, find root cause, file service tickets and, wherever possible, automate the incident resolution. Not only would it reduce operations costs, but it also frees up these workers to focus on more meaningful endeavors. That’s the concept of the digital colleague. This holds true for ITSM, ESM or any function in need of assistance. A digital colleague for the service desk will converse with end users in natural language through multiple channels, understand the user intent, map it into the service catalog, ask clarifying questions and automate resolution of up to 50% of service requests, thus taking load off operations staff.
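The triage logic of such a digital colleague can be sketched as a runbook lookup (the incident types and remediations below are invented): known patterns get auto-remediated with a ticket filed, everything else escalates to L2/L3:

```python
RUNBOOK = {
    "disk_full": "rotate logs and expand volume",
    "service_down": "restart service and verify health check",
}

def handle(incident: dict) -> dict:
    """Always file a ticket; automate known fixes; escalate the rest."""
    known = incident["type"] in RUNBOOK
    return {
        "ticket": True,
        "action": RUNBOOK.get(incident["type"]),
        "escalate": not known,
    }

auto = handle({"type": "disk_full"})       # resolved without waking anyone
manual = handle({"type": "novel_alarm"})   # routed to the human NOC team
```

Production versions replace the dictionary with intent classification and learned root-cause models, but the division of labor is the same.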


Data Science, IoT and Reinforcement Learning in High Tech Manufacturing


The goal of reinforcement learning is to train a machine learning algorithm to achieve a goal by producing a particular sequence of outputs for a given sequence of inputs. The rule that the algorithm uses to map inputs to outputs is called the policy. The algorithm randomly explores the solution space until it finds a policy that allows it to achieve its intended goal. This can require the algorithm to run for much longer than a supervised algorithm would. Reinforcement learning is also being explored in the industrial robotics sector to help industrial machines handle industrial goods. Handling and moving an industrial good usually involves a large number of individual movements from a robotic arm. These movements are very difficult to pre-program using conventional programming techniques because of the large number of sequential steps required, so research on robotics powered by reinforcement learning is now being seriously pursued. Other emerging cutting-edge applications of reinforcement learning include the allocation or subdivision of computing resources between many different industrial machines.
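The input-to-output mapping described above can be made concrete with a toy example. The sketch below (illustrative only, not from the article) uses tabular Q-learning, one classic reinforcement learning method, to learn a policy for a five-state corridor; the environment, reward and hyperparameters are all invented for the demo.

```python
import random

# Toy illustration (not from the article): tabular Q-learning on a
# five-state corridor where the agent must discover that moving right
# reaches the goal. The learned "policy" is the greedy state -> action map.
N_STATES, ACTIONS = 5, [0, 1]   # action 0 = left, 1 = right
GOAL = N_STATES - 1

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Randomly explore the solution space some of the time
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    # The policy: for each state, the action with the highest learned value
    return [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]

policy = train()
```

After training, the greedy policy for every non-terminal state is "move right", discovered purely through random exploration and reward feedback rather than from labeled examples.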


Why good cybersecurity in business is everyone’s responsibility

Creating a framework for managing risk that can be understood across the organization, even by non-cybersecurity professionals. It doesn’t need to be a comprehensive measurement of all risks, but it should use risk indicators representative of the main risk areas, so as to provide an overall barometer of cybersecurity risk and to ensure it’s kept as part of the business conversation. Making sure cyber is part of the dialogue at the highest levels of the organization. If the CEO talks about phishing awareness, there’s a good chance this will become a priority at all levels. Creating a security instruction and awareness function and appointing a senior leader responsible for running security awareness campaigns and overseeing security training. This executive should be empowered to work with colleagues across various business functions to design programmes that address the needs of different employee specialities.


5G and smart cities Q&A: What role will telcos play?

Emergency services and policing will also be impacted. With cities growing at the rate of 1.5 million new citizens every day, 5G services will allow police forces to better monitor the city environment through automation, providing more efficient services, greater public safety and cities in line with environmental targets. Further to this, autonomous vehicles, high-footfall areas, carbon emission levels, safety, and new road and pedestrian planning will all benefit from enhanced monitoring services thanks to 5G. Lastly, there are the environmental use cases to consider. Power supply and lighting will change as a result of 5G, making the lighting of cities and the distribution of energy through smart grid systems more efficient. Telecoms providers are essential to this. For a start, they can assist with connecting those who are generating energy back into the grid, which is vital for the two-way purchase and sale of energy to succeed. Substations will need higher capacity and faster connections — provided by fibre — in order to facilitate this flow.



Quote for the day:


"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson



Daily Tech Digest - January 30, 2020

IT pros need to weigh in on that ‘sassy’ security model

Cloud and SaaS adoption by enterprises has changed network traffic patterns, requiring fundamental change in network and security architectures. As Gartner notes, the role of the enterprise data center has changed dramatically: more user traffic goes to cloud services than to those data centers, more workloads run in IaaS than in the data centers, and cloud services contain more sensitive data than enterprise data centers do. The use of the enterprise network has also changed, with more user work done off the network than on, and more applications accessed via SaaS than from the enterprise, Gartner says. So controlling access and applying security policies based on the user, device and application connecting to the network makes more sense than focusing access control on the data center. Advances in network/security software and cloud intelligence have enabled new solutions, such as SD-WAN, SD-Branch and CASB, that are quick to deploy, scalable, flexible and simple to manage. Edge computing and IoT applications require distributed, low-latency networking and security that are likely to be delivered as cloud-based services.



JetBrains taps machine learning for full-line code completion
JetBrains has laid out a 2020 roadmap for IntelliJ IDEA and its IntelliJ-based IDEs. The promised new capabilities range from additional machine-learning-driven code completion to collaborative editing. The company said the additional machine-learning-based code completion would make better use of context for ranking completion suggestions and generate completion variants that go beyond a single identifier to provide full-line completion. Considered a major area of investment, full-line completion may take a while to appear in the product. JetBrains had already been exploring the use of machine learning for code completion, and some results of that research have made their way into products. IntelliJ now uses machine learning to improve the ranking of completion variants, and language plug-ins tag each produced completion variant with different attributes. IntelliJ also uses machine learning to determine which attributes contribute to item ranking so the most relevant items are at the top of the list. In addition to machine-learning-based code completion, JetBrains cited a multitude of improvements to IntelliJ for 2020, subject to change.
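As a rough sketch of what attribute-based ranking of completion variants might look like (the attribute names and weights below are hypothetical; JetBrains has not published its model), consider:

```python
# Hypothetical sketch (not JetBrains' actual model): each completion
# candidate is tagged with attributes, and learned weights decide which
# attributes contribute to ranking so relevant items float to the top.
CANDIDATES = [
    {"text": "userName",  "prefix_match": 1, "same_file": 1, "recently_used": 0},
    {"text": "userNames", "prefix_match": 1, "same_file": 0, "recently_used": 0},
    {"text": "uuid",      "prefix_match": 0, "same_file": 0, "recently_used": 1},
]
# These weights would come from an offline-trained model; fixed here for illustration.
WEIGHTS = {"prefix_match": 2.0, "same_file": 1.0, "recently_used": 0.5}

def rank(candidates, weights):
    score = lambda c: sum(weights[k] * c[k] for k in weights)
    return sorted(candidates, key=score, reverse=True)

ranked = [c["text"] for c in rank(CANDIDATES, WEIGHTS)]
```

The ranking step itself is trivial; the machine-learning work lies in choosing which attributes matter and learning their weights from real completion sessions.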



There are certain considerations when it comes to edge virtualization. For example, admins must determine whether their data centers are ready for edge virtualization and whether they require a complex instruction set computing (CISC) processor or a reduced instruction set computing (RISC) processor. However, edge virtualization can ease device management, reduce costs and manage vast amounts of data, all of which significantly benefit modern data centers. A main benefit of edge virtualization is device management: in implementing virtualization at the edge, admins can track resources, monitor performance and ensure the health of their systems to better control their edge devices. Admins can use VMware ESXi to control their edge devices. This is beneficial because ESXi provides added isolation, which helps increase the security of edge devices. In addition, hypervisors such as ESXi help ensure each VM within a network has the resources required to perform efficiently.



EU implements 5G infrastructure restrictions on ‘high-risk’ suppliers


The EU sees closely coordinated implementation of the toolbox as indispensable to ensure EU businesses and citizens can make full use of all the benefits of the new technology in a secure way. “We can do great things with 5G,” said Margrethe Vestager, executive vice-president for a Europe Fit for the Digital Age. “The technology supports personalised medicines, precision agriculture and energy grids that can integrate all kinds of renewable energy. “This will make a positive difference, but only if we can make our networks secure. Only then will the digital changes benefit all citizens.” Thierry Breton, commissioner for the EU Internal Market, added: “Europe has everything it takes to lead the technology race. Be it developing or deploying 5G technology, our industry is already well off the starting blocks. Today we are equipping EU member states, telecoms operators and users with the tools to build and protect a European infrastructure with the highest security standards, so we all fully benefit from the potential that 5G has to offer.”


Google looks ahead to the next decade of AI research


While Google and the industry at large have made significant strides in AI in the past few years, public awareness of the technology's potential drawbacks -- and corresponding regulation -- is only now beginning to catch up with the industry. Google has, in turn, started talking more about the ethical guidelines it applies to its AI research. About a year and a half ago, the company released a set of principles to help guide its development of AI applications. Google also committed to refraining from building AI for technologies that could cause harm, such as weapons.  "As we start to think about how these systems and this research gets out into the world, it's really important for us to think about what are the implications of this work, and how should we be thinking about applying it to certain kinds of problems, and the problems we shouldn't be applying it to," Dean said.  While it's easy to look at Google's commitments and scratch "weaponized drones" off its list of technologies to build, there are plenty of other AI-driven technologies -- even seemingly innocuous ones -- that could cause harm.


Simulating Agile Strategies with the Lazy Stopping Model

The "Lazy Stopping Model" therefore just reflects the idea that we choose how much information to gather before taking an action. If we gather less than we "should", for some reason, then we can say that the agent (a simulated person or organisation) has stopped gathering info and is taking action before it should. But in practice, it may be impossible to avoid "lazy stopping," which is where agile strategies come in. Agility is mainly a defensive strategy against your own ignorance. It’s about dealing with the costs of previous decisions by failing fast and thereby learning quickly, and/or by lowering the cost of adjusting and re-working things when you learn that what you built or deployed at first is not quite right. This includes creating an environment and office culture where that is OK and expected, as long as you also learn quickly. In contrast, to maximise efficiency, a more offensive strategy would be used when you are confident you have enough information to act quickly and maximise your advantage over competitors.
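One reading of this trade-off can be sketched as a small simulation (the assumptions are mine, not the author's model: Gaussian noise on each piece of information, a linear cost per sample, and rework cost proportional to the estimate's error):

```python
import random

# A toy reading of the lazy-stopping idea (assumptions mine): an agent
# draws noisy samples of some true quantity, pays for each sample, then
# acts on its estimate and pays rework costs proportional to its error.
def simulate(stop_after, info_cost=1.0, rework_cost=10.0, noise=2.0,
             trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        truth = rng.uniform(0, 10)
        samples = [truth + rng.gauss(0, noise) for _ in range(stop_after)]
        estimate = sum(samples) / len(samples)
        # Cost of gathering info, plus the cost of re-working a wrong call
        total += info_cost * stop_after + rework_cost * abs(estimate - truth)
    return total / trials

costs = {n: simulate(n) for n in (1, 2, 5, 10, 20)}
```

Average cost is high when stopping lazily after one sample, falls toward a sweet spot, and rises again once information gathering itself becomes the dominant expense; lowering `rework_cost` (the agile move) shifts the optimum toward acting earlier.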


Data privacy: Top trends to watch in 2020

Technology (AI and ML) is being "blamed" for our current data privacy imbroglio, but technology is what can help solve it as well. Privacy enhancing technologies (PETs) represent a new, emerging category of technologies, and are increasingly being used to protect data privacy while enabling data use. Prior to the emergence of PETs, previous solutions tended to rely mostly on de-identification and anonymization, which usually involved removing personally identifiable information (PII) fields from data sets. However, anonymization technologies have been rendered insufficient by advancements in AI and machine learning capabilities, which enable re-identification of anonymized data. PETs in the realm of secure computing, such as homomorphic encryption, multi-party computing (MPC), zero-knowledge proofs and differential privacy, are introducing new paradigms for protecting various modalities of data usage. For example, my company, Duality Technologies, enables data science computations to be performed on encrypted data, which allows sensitive data to be analyzed and processed by our customers' partners while remaining protected.
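As one concrete illustration of a PET from that list, here is a minimal sketch of the Laplace mechanism from differential privacy, which releases a count only after adding calibrated noise (a toy demo; real deployments use vetted DP libraries and careful privacy accounting):

```python
import math
import random

# Toy Laplace mechanism: the sensitivity of a counting query is 1, so
# noise drawn from Laplace(0, 1/epsilon) masks any one person's record.
def dp_count(values, predicate, epsilon, rng):
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    u = rng.random() - 0.5                      # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, scale) variate
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

rng = random.Random(42)
ages = [23, 35, 41, 29, 52, 61, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)   # true count is 3
```

Any single release carries enough noise that one individual's presence or absence is statistically hidden, while repeated independent queries still average out near the true count.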


Using Azure AD conditional access for tighter security


Legacy authentication is used for many types of attacks against Azure AD-based accounts. If you block legacy authentication, you will block those attacks, but there's a chance you'll also prevent users from performing legitimate tasks. This is where Azure AD conditional access can help. Instead of a simple off switch for legacy authentication, you can create one or more policies -- sets of rules -- that dictate what is and isn't allowed under certain scenarios. You can start by creating an Azure AD conditional access policy that requires modern authentication or blocks the sign-in attempt. Microsoft recently added a "report only" option to conditional access policies, which you should enable and leave on for a few days after deployment. This will show you the users still relying on legacy authentication whom you need to remediate before you enforce the policy for real, helping to ensure you don't stop users from doing their jobs. However, this change will severely limit mobile phone email applications.
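The policy-plus-report-only idea can be sketched abstractly (a toy evaluator, not the actual Azure AD engine or its API; all names are invented):

```python
# Illustrative sketch: a rule set that blocks legacy-authentication
# sign-ins, with a "report only" mode that logs what *would* be blocked
# instead of enforcing the block.
def evaluate(sign_in, policies, audit_log):
    for policy in policies:
        if sign_in["client"] in policy["blocked_clients"]:
            if policy["report_only"]:
                audit_log.append((policy["name"], sign_in["user"]))
            else:
                return "blocked"
    return "allowed"

policies = [{"name": "Block legacy auth",
             "blocked_clients": {"imap", "pop3", "smtp-basic"},
             "report_only": True}]
log = []
result = evaluate({"user": "alice", "client": "imap"}, policies, log)
```

In report-only mode the sign-in succeeds but lands in the audit log, which is exactly the list of users to remediate; flipping `report_only` to `False` turns the same rule into enforcement.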


Oracle customers complain of cloud coercion


In a number of instances, the Itam Review found that Oracle customers were being coerced into buying cloud services. “We have been in an audit situation three years ago,” one user told the Itam Review. “Even though we had been licensed properly, due to mergers and acquisitions, Oracle figured out that the licenses were not properly ‘transferred’ to the new companies. Oracle then threatened us with a fine of over €150,000.” The user then said that Oracle offered to waive the penalty if €50,000 of Oracle cloud licences were purchased instead. “We agreed to do so, fixed everything, got that certificate of compliance,” the user said. “We never used that Oracle cloud because we did not need it and because that cloud was not technically effective.” For Thompson, the poll illustrates the challenges that Oracle faces as it tries to establish itself as a major cloud provider in a market dominated by AWS, Microsoft Azure, Alibaba and Google Cloud.


Cisco offers on-prem Kubernetes-as-a-Service to challenge public cloud

The HXAP is designed to take the hard work out of Kubernetes and make it as easy as deploying an appliance, said Liz Centoni, senior vice president and general manager of Cisco Cloud, Compute, and IoT. “We integrate the Kubernetes components and lifecycle-manage the operating system, libraries, packages and patches you need for Kubernetes. Plus, we manage the security updates and check for consistency between all components every time you deploy or upgrade a cluster. We then enable IT to deliver a Container-as-a-Service experience to developers – much like they are used to getting in the public cloud.” ... As part of the HXAP rollout, Cisco said instances can also be installed, operated and managed via its Intersight platform, the cloud-based management package for its Unified Computing System (UCS) and HyperFlex computing environments. “Intersight also adds management and monitoring of virtual machines and containers allowing operators to create, expand and upgrade Kubernetes clusters from the cloud. With the addition of the Intersight Mobile App customers can also manage and monitor their global infrastructure and container footprint from the palm of their hand,” Venugopal wrote.



Quote for the day:


"Leaders are readers, disciples want to be taught and everyone has gifts within that need to be coached to excellence." -- Wayde Goodall


Daily Tech Digest - January 29, 2020

3 Key Success Factors for Improving Test Automation Outcomes
While building automation-ready test designs and ensuring system testability are key steps to achieving automated testing, businesses can take their automation to the next level by implementing continuous testing (CT). Continuous testing can lead to faster feedback, quicker release turnaround, and higher customer satisfaction and loyalty, giving businesses the best chance of not only surviving the future of software delivery but thriving in it. Because continuous testing is the most advanced form of testing, it is also the most challenging. ... Testers should also consider the test agents when provisioning test machines (on a PC, laptop, etc.) or virtual machines (in the cloud, containers, etc.). Once the machine is set, the dispatcher should efficiently distribute the tests, and any developer or tester should kick off the tests to run in parallel. This way, if there is more than one development or scrum team with different needs, they can run automated tests without waiting for another team to complete their run.


Businesses Improve Their Data Security, But Privacy — Not So Much

"Privacy is one of the major pieces of collateral damage that no one talks about in our reaction to September 11," he says. "It set us on a path to use data and the Internet as a tool to combat terrorism, and I understand why, rather than really moving forward on where the President's instincts were on putting the consumer first." For the past decade, companies have been focused on dodging online criminals — and then nation-state actors — intent on stealing data. With the passage of the GDPR, focusing on data security became a business imperative to avoid larger fines. Yet the policy discussion and legal landscape have become more nuanced, says Ackerly. Companies are beginning to understand that customers want privacy, he says. "I am optimistic as I've ever been on this journey that we will end up in a place where individuals will be able to take control over their data where ever it is shared," Ackerly says. "I think it is a combination of technology evolving and society just waking up to the trade-offs that we have made over the past 15 or 20 years."


SaaS transformation: Tax Automation partners with Pulsant to reduce costs and improve delivery

Initially VAT Controller was hosted on dedicated servers in one of Pulsant’s data centres, with failover to another of its data centres in the event of unplanned downtime. With the development of Pulsant Enterprise Cloud, a combination of private cloud and managed hosting, Tax Automation worked with Pulsant to migrate to this hosting model because it features the best hardware, robust cloud delivery and proven management software. In the next phase of the relationship, Tax Automation worked with the Pulsant team to find the best way to host its Capital Assets Database software and make it quicker and easier to access for clients, without hosting the infrastructure on its own site. The Capital Assets Database is a client-server application, which necessitated a more robust hosting solution. The software is accessed through a user interface, in this case Citrix, and not just a web browser. As a result, Pulsant needed to ensure a smooth integration between the hosted server and user interface. Rainer and his team worked with Pulsant to devise the most appropriate solution to meet requirements.


Opinion: ‘Scale’ is the magic word for digital transformation in 2020

Smart manufacturing requires convergence between IT and OT data to drive visibility, collaboration, and efficiency within plants and facilities and across operations. However, two decades after automation networks on the plant floor became ubiquitous, it’s still generally true that information accessibility between plant floor devices (OT) – and the people and systems that can create new value from them (IT) – proves to be a significant challenge. To remove the complexity and domain expertise required to access plant floor devices and systems, manufacturers are turning to auto-discovery tools that identify assets, collect and integrate data with full OT context, and produce models fully shareable with IT systems. By connecting existing OT infrastructure to smart factory networks and IT initiatives, and continuously generating relevant data insights and measurements, auto-discovery capabilities reduce the technical knowledge and time needed by OT teams to map industrial infrastructure and improve operational efficiency.


There are two types of unsupervised learning models: discriminative models and generative models. Discriminative models can only tell you that if you give them X, the consequence is Y, whereas a generative model can tell you the total probability that you’re going to see X and Y at the same time. The difference is as follows: the discriminative model assigns labels to inputs and has no predictive capability beyond what it has seen. If you gave it a different X that it has never seen before, it can’t tell what the Y is going to be, because it simply hasn’t learned that. With generative models, once you set one up and find the baseline, you can give it any input and ask it for an answer. Thus it has predictive ability – for example, it can generate a possible network behavior that has never been seen before. So let’s say some person sends a 30-megabyte file at noon; what is the probability that he would do that? If you asked a discriminative model whether this is normal, it would check to see if the person had ever sent such a file at noon before… but only specifically at noon.
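That contrast can be sketched with a deliberately crude example (all data and modeling choices below are mine, purely for illustration): a lookup table stands in for the discriminative side, and a naive pair of fitted Gaussians stands in for the generative side.

```python
import math

# Invented example data: (hour of day, megabytes sent)
observed = [(9, 1.2), (12, 2.0), (12, 1.8), (17, 0.9)]

# "Discriminative" stand-in: a lookup of exactly what was seen, and when.
lookup = {}
for hour, size in observed:
    lookup.setdefault(hour, []).append(size)

# Generative stand-in: fit a Gaussian per variable and (naively) assume
# independence, so P(hour, size) ≈ P(hour) * P(size).
def fit(samples):
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, (math.sqrt(var) if var > 0 else 1.0)

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

hour_dist = fit([h for h, _ in observed])
size_dist = fit([s for _, s in observed])

def generative_score(hour, size):
    return gaussian_pdf(hour, *hour_dist) * gaussian_pdf(size, *size_dist)

# A 30 MB file at noon: the lookup has never seen 30 MB and can only say
# "not found", while the generative model assigns it a (tiny) probability.
seen_noon = 12 in lookup
score = generative_score(12, 30.0)
```

The lookup can only confirm or deny exact past observations; the generative model returns a probability for any (hour, size) pair, including ones it has never seen, which is what makes it useful for flagging anomalous behavior.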



Having the right data at the right time and with the right level of confidence at the point of use is priceless, but all these unexpected, unannounced and unending changes to data, collectively termed data drift, are beyond our control and lead to operational risk. While still in its early days, I believe that in 2020 we will see more pervasive interest in DataOps. DataOps is the set of practices and technologies that brings the end-to-end automation and monitoring sensibilities of DevOps to data management and integration. But what makes it DataOps are drift-resilient smart data pipelines, from which living, breathing end-to-end data topologies emerge. Instead of ignoring or fighting data drift, DataOps embraces and harnesses it to speed up data analytics, with confidence. Some indicators that we’ve noticed here at StreamSets include a small but burgeoning cross-section of customers that are embracing DataOps approaches. The recent DataOps Summit highlighted many of their use cases and resulting business impact. Searches for the term “DataOps” have tripled, vendors are entering the space with DataOps offerings, and we’re seeing a number of DataOps business titles appearing on LinkedIn profiles.
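One small, hypothetical example of a drift-resilient pipeline step (the schema and field names are invented): instead of assuming incoming records match the expected schema, check each record and surface drift explicitly rather than letting it silently break downstream consumers.

```python
# Hypothetical pipeline step: detect schema drift between the records a
# pipeline expects and the records actually arriving.
EXPECTED = {"user_id": int, "amount": float, "country": str}

def detect_drift(record, expected=EXPECTED):
    issues = []
    for field, typ in expected.items():
        if field not in record:
            issues.append(f"missing field: {field}")
        elif not isinstance(record[field], typ):
            issues.append(f"type drift in {field}: got {type(record[field]).__name__}")
    for field in record.keys() - expected.keys():
        issues.append(f"new field: {field}")
    return issues

# A record whose source system changed underneath us
issues = detect_drift({"user_id": 7, "amount": "12.50", "currency": "EUR"})
```

A drift-aware pipeline can route such records to a quarantine stream, alert operators, or adapt its mappings, rather than failing downstream hours later.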


NHS suffers fewer ransomware attacks, but threat persists


Nevertheless, noted Bischoff, the decrease in attack volumes appears, at face value, to show that the money invested in security, coupled with the launch of NHSX, since the WannaCry attacks has had the desired effect, to some extent. Even so, the NHS spends only about 2% of its total budget on IT, compared with 4-10% in other sectors, according to Saira Ghafur, digital health lead at Imperial College London’s Institute for Global Health Innovation. So the health service still needs more funding to replace ageing infrastructure and secure both endpoint devices and connected healthcare equipment, she said. Speaking at a think-tank event on security in October 2019, Ghafur said the NHS faced other security challenges, particularly around skills. “We can’t compete with other sectors in terms of attracting cyber security professionals – we need to work with the industry to attract them into healthcare – and all NHS staff need better education in terms of risks,” she said.


Which cloud strategy is right for your business in 2020?

Hybrid cloud shouldn’t be an afterthought — it should instead be viewed as a fundamental design principle upon which vital infrastructure building blocks are built. We recommend beginning with a hybrid cloud infrastructure built from the ground up to ensure flexibility and choice. Companies need to consider developer enablement and productivity, allowing them to build and deploy apps to a hybrid cloud. After all, the needs of an app might change, from on premise today to public cloud tomorrow. In line with this, management tools that orchestrate workloads and automation tools to simplify day-to-day operations are essential to delivering the complete value of a hybrid cloud. Making a conscious decision to retain on-premise data centres and continuing to invest where it makes sense — for specific workloads — is often an important part of a hybrid cloud strategy. It’s not about having on-premise data centres, adopting a bit of public cloud, ending up with two and calling that hybrid cloud.


Implications of Using Artificial Intelligence in Film Studio Decision-Making

AI and the Auteur: Implications of Using Artificial Intelligence in Film Studio Decision-Making
Predictive analytics could be run against a variety of factors concerning potential actors, screenwriters, and directors: such characteristics could include one’s gender, age, race, ethnicity, disability or impairment, sexual orientation, and so on. As we have seen from studies on machine learning in the criminal justice context, algorithms can perpetuate human biases. It is foreseeable that the AI could become path-dependent, err on the side of caution, and fail to account for cultural shifts in audience attitudes. For instance, of the 100 top-grossing movies made between 2017 and 2018, only 33 per cent of all speaking or named characters were girls or women. If this metric were analysed in isolation, it is conceivable that a machine learning algorithm would lean towards viewing male protagonists as a safer choice for higher profits. As Norwegian filmmaker Tonje Hessen Schei told Screen Daily, a concern with this new process is that it may become “harder and harder to get a diverse span of voices that will be heard in the market.” The legal implications or responses on this point are somewhat unclear.


Securing Containers with Zero Trust

From the perspective of a firewall, all it would see is a packet coming from that host, a machine it has been told to trust. It will allow that packet, which in turn allows attackers to exfiltrate data, encrypt the data, or use SQL itself to move further across the network toward their target. Now let's add a second container to the host. In a Docker classic environment, all the containers are network-address translated to look like the host, so it's impossible to determine where the traffic originated. In a bridging scenario, there are multiple ways to impersonate the Java microservice inside the container. And just as with other network plug-ins, the Linux machine serving as the host has a large network attack surface. ... If the purpose of a policy is to only allow this specific Java microservice to communicate with a SQL database, in a firewall model, this all has to be transformed into a long series of network addresses, which have to change on the fly as the network infrastructure itself changes.
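The contrast between address-based and identity-based policy can be reduced to a few lines (an illustrative toy, not any vendor's implementation; all names are invented):

```python
# Toy contrast: a network firewall trusts whatever source IP a packet
# arrives from, while a zero-trust policy checks the attested workload
# identity, so a second container sharing the host IP can't impersonate
# the legitimate service.
FIREWALL_ALLOW = {"10.0.0.5"}                   # trusted host address
ZT_POLICY = {("java-microservice", "sql-db")}   # allowed identity pairs

def firewall_allows(packet):
    return packet["src_ip"] in FIREWALL_ALLOW

def zero_trust_allows(packet):
    return (packet["workload_id"], packet["dest_service"]) in ZT_POLICY

# An attacker's container shares the host IP but not the workload identity
attacker = {"src_ip": "10.0.0.5", "workload_id": "cryptominer",
            "dest_service": "sql-db"}
fw, zt = firewall_allows(attacker), zero_trust_allows(attacker)
```

The firewall model also forces policy to be re-expressed as addresses that churn with the infrastructure, whereas the identity pair above stays valid no matter where the container is scheduled.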



Quote for the day:


"Give whatever you are doing and whoever you are with the gift of your attention." -- Jim Rohn


Daily Tech Digest - January 28, 2020

Data Protection Day 2020: What goals should companies be aiming for?

“As 5G continues to roll out globally, everything and everyone will become more connected than ever,” said Stan Lowe, global CISO at Zscaler. “IoT devices in the streets and in the home will all become connected with 5G. Our Alexa, our Google Home, our car and practically everything else will be constantly harvesting data and forwarding it to corporations for marketing purposes and to build your digital profile. ... “It is no secret that up-and-coming companies innovate at a faster rate than governments can introduce regulations, such as GDPR,” he said. “With laws and governmental bodies usually about five to six years behind the innovators, who are constantly innovating on ways to use the data that they harvest, the onus is on these companies to use the data they collect in a safe, fair and ethical way. “Ultimately, our data is a tradeable commodity, and corporations have a lot of power when it comes to how they use the data they collect. ...”


Software developers can create better programs with AI

Requirements management is the process of gathering, validating, and tracking the requirements that end users have for a program. But if mismanaged, this process can cause software projects to go over budget, face delays, or fail entirely. Using AI, digital assistants can analyze requirements documents, find ambiguities and inconsistencies, and offer improvements. Powered by natural language processing, these tools can detect such issues as incomplete requirements, immeasurable quantification (missing units or tolerances), compound requirements, and escape clauses. Companies using such tools have reportedly been able to reduce their requirements review time by more than 50%, according to Deloitte. As developers type, AI-powered code completion tools can serve up recommendations for finishing lines of code. Some tools can even display a list of usable code snippets based on relevance. AI-powered code-review tools can understand the intent of the code and look for common mistakes, thereby detecting bugs and suggesting code changes. Video game company Ubisoft says the use of machine learning is helping it catch 70% of bugs prior to testing.
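A trivial, pattern-based sketch of the kinds of requirements checks described above (real tools use trained NLP models; the phrases and units below are invented for the demo):

```python
import re

# Hypothetical requirements linter: flag escape clauses and quantities
# that lack a unit or tolerance, two of the issue types mentioned above.
ESCAPE_CLAUSES = re.compile(
    r"\b(if possible|as appropriate|where feasible|to the extent possible)\b", re.I)
BARE_NUMBER = re.compile(r"\b\d+(\.\d+)?\b(?!\s*(ms|s|%|mm|kg|MB|GB))")

def lint_requirement(text):
    findings = []
    if ESCAPE_CLAUSES.search(text):
        findings.append("escape clause")
    if BARE_NUMBER.search(text):
        findings.append("missing unit or tolerance")
    return findings

findings = lint_requirement("The system shall respond within 200, if possible.")
```

Even this crude version shows why such tools save review time: the two flagged defects ("within 200" what? and the non-binding "if possible") are exactly the ambiguities a human reviewer would otherwise have to hunt for line by line.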


Neural Architecture & AutoML Technology

AutoML
AutoML focuses on automating each part of the machine learning (ML) workflow to increase effectiveness and democratize machine learning so that non-specialists can apply it to their problems effortlessly. While AutoML includes the automation of a wide scope of problems associated with ETL (extract, transform, load), model training, and model development, the problem of hyperparameter optimization is a core focus. This problem involves configuring the internal settings that govern the behavior of an ML model or algorithm so as to return a high-quality predictive model. Creating neural network models frequently requires significant architecture engineering. You can sometimes get by with transfer learning, but if you truly need the best possible performance it’s generally best to design your own network. This requires specialized skills (read: costly from a business point of view) and is challenging in general; we may not even know the limits of current state-of-the-art methods. It’s a lot of trial and error, and the experimentation itself is tedious and costly.
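The hyperparameter-optimization core of AutoML can be sketched as random search over a configuration space (the search space and scoring function below are invented for illustration; a real AutoML system would train and validate actual models at each trial):

```python
import math
import random

# Hypothetical search space; each entry knows how to sample itself.
SPACE = {
    "learning_rate": lambda rng: 10 ** rng.uniform(-4, -1),
    "num_layers":    lambda rng: rng.randint(1, 8),
    "dropout":       lambda rng: rng.uniform(0.0, 0.5),
}

def toy_validation_score(cfg):
    # Stand-in for "train a model with cfg and measure validation quality";
    # this fake objective peaks at lr=1e-2, 4 layers, dropout 0.2.
    return (-abs(math.log10(cfg["learning_rate"]) + 2)
            - 0.3 * abs(cfg["num_layers"] - 4)
            - abs(cfg["dropout"] - 0.2))

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: sample(rng) for name, sample in SPACE.items()}
        score = toy_validation_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
```

Random search is the simplest baseline; production AutoML systems replace it with Bayesian optimization, bandit-style early stopping, or evolutionary search, but the loop of sample a configuration, evaluate it, keep the best is the same.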



LoRaWAN Encryption Keys Easy to Crack, Jeopardizing Security of IoT Networks


The LoRaWAN protocol defines two layers of security: one at the network level and another at the application level, researchers described in the report. The network-level security ensures the authenticity of the device in the network, providing integrity between the device and the network server, they wrote. The application-layer security is responsible for confidentiality with end-to-end encryption between the device and the application server, preventing third parties from accessing the application data being transmitted. Each layer of protection depends on the security of two encryption keys–the Network Session Key (NwkSKey) and the Application Session Key (AppSKey), both of which are 128 bits long. These keys are “the source of the network’s only security mechanism, encryption,” and thus, once cracked, basically give hackers an open invitation to the devices and networks being protected by them, researchers noted.


CEOs are deleting their social media accounts to protect against hackers


"It's clear that cybercrime continues to grow as an issue for CEOs around the world, meaning that for many, the threat to their margins, their brands and even their continued existence from cyber attacks is no longer an abstract risk that can be ignored," said Richard Horne, cybersecurity chair at PwC. "Criminals are becoming more adept at monetizing their breaches, with a sharp rise in ransomware attacks this last year. They can have a devastating impact on the organisations they hit, as seen in many high-profile cases". The boardroom itself isn't immune to cyber crime, as attackers will target executives - and the PwC report found that almost half of CEOs are taking action to make themselves less vulnerable to cyber attacks. It said 48 per cent of CEOs surveyed said the risk of cyber attacks had caused them to alter their own personal digital behavior, such as deleting social media accounts or virtual assistant applications, or requesting that a company delete their data. Social media accounts could potentially be targeted by criminals as a means of gaining access to personal information about victims, while there have been privacy concerns about virtual assistants and their ability to enable unwanted eavesdropping.


More people would trust a robot than their manager, according to study

Delegating that low-level work to a machine provides more time for the things that matter. “Machines will elevate the manager’s experience,” says Emily He, senior vice president of Human Capital Management at Oracle. “People see the difference between artificial intelligence and human intelligence, and they want from their managers things that machines can’t provide—things like empathy, personalized coaching, and career advice.” Imagine a workplace where managers, unencumbered by work that machines can do, can focus on people. “AI and machine learning are going to bring humanity back to the workplace,” says He. “[In] the last hundred years the advance of technology has made the workplace less human because the interface with technology has not been very natural. With AI, humans can go back to what is distinctly human and what they enjoy doing, which is to connect with each other, work on projects together, and generate new ideas.” Managers need to be prepared for this new world that will demand effective leadership.


How to build ethical AI

One of the most difficult questions we must address is how to overcome bias, particularly the unintentional kind. Let’s consider one potential application for AI: criminal justice. By removing prejudices that contribute to racial and demographic disparities, we can create systems that produce more uniform sentencing standards. Yet, programming such a system still requires weighting countless factors to determine appropriate outcomes. It is a human who must program the AI, and a person’s worldview will shape how they program machines to learn. That’s just one reason why enterprises developing AI must consider workforce diversity and put in place best practices and control for both intentional and inherent bias. This leads back to transparency. A computer can make a highly complex decision in an instant, but will we have confidence that it’s making a just one? Whether a machine is determining a jail sentence, or approving a loan, or deciding who is admitted to a college, how do we explain how those choices were made? And how do we make sure the factors that went into that algorithm are understandable for the average person?


Will blockchain deliver across industries? Or will the tech fall flat?

“With the ability to span across multiple industries, it ensures products can be traced, authenticated and verified on digital ledgers. In the pharmaceutical industry, organisations can apply blockchain technologies to ensure tailored drugs are delivered to the right person, by utilising a secure IoT platform to make sure medications are the right quality and don’t fail during the supply process; such failures can ultimately affect efficacy when the drug is taken by the patient. “Through blockchain, these companies are able to verify where their product has travelled, and which components have been added at each transition point. In industries where each product can use components from tens or even hundreds of companies at one time, blockchain technologies ensure that the whole supply chain is more transparent, accountable and secure.” ... There are challenges, however, as Akber Datoo — CEO of D2 Legal Technology — highlights.
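The traceability idea in the quote, where each transition point appends a verifiable record, can be sketched as a toy hash-chained ledger in Python. This illustrates only the tamper-evidence property, not a real blockchain deployment: there is no network, consensus, or signing, and the `SupplyChainLedger` class and its field names are invented for the example.

```python
import hashlib
import json

# Toy sketch: each supply-chain record stores the hash of the previous
# record, so any later edit to history breaks the chain of links.

def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class SupplyChainLedger:
    def __init__(self) -> None:
        self.records: list[dict] = []

    def add_transition(self, product_id: str, location: str, component: str) -> None:
        """Append a record linked to the hash of the previous record."""
        prev = record_hash(self.records[-1]) if self.records else "0" * 64
        self.records.append({
            "product_id": product_id,
            "location": location,
            "component": component,
            "prev_hash": prev,
        })

    def verify(self) -> bool:
        """Re-derive every link; an edit to any past record fails the check."""
        return all(
            self.records[i]["prev_hash"] == record_hash(self.records[i - 1])
            for i in range(1, len(self.records))
        )

ledger = SupplyChainLedger()
ledger.add_transition("drug-42", "ingredient supplier", "active ingredient")
ledger.add_transition("drug-42", "factory", "tablet press batch")
ledger.add_transition("drug-42", "packaging plant", "blister pack")
assert ledger.verify()

ledger.records[0]["location"] = "unknown"  # tamper with history
assert not ledger.verify()
```

Changing any historical record invalidates every later `prev_hash` link, which is the property that lets companies verify where a product has travelled and which components were added at each transition point.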


AI Facial Recognition and IP Surveillance for Smart Retail, Banking, and the Enterprise

As facial recognition technology continues to advance, one of the latest trends comes from CyberLink's FaceMe® AI Facial Recognition engine being integrated into Vivotek's IP surveillance solutions of network cameras and back-end video management software. This integration enables security operators to receive accurate facial recognition alerts based on both blacklists and whitelists. According to Dr. Jau Huang, CyberLink's Founder and CEO, "the demand for Facial Recognition is booming, driven by the latest IoT and AIoT innovations, and is enabling a wide array of scenarios across industries such as security, home, public safety, retail, banking, and more." He says that each application depends on the performance of the cameras used to capture faces, and that by integrating FaceMe into Vivotek's surveillance devices it is possible to bring accurate and reliable new solutions to the market.


Privafy claims ‘fundamentally new’ approach to mobile data security

“Data has never been less secure,” said Privafy co-founder and CEO Guru Pai. “Solutions developed by the networking industry to protect data are rapidly becoming obsolete for today’s cloud- and mobile-based workloads. “Also, technologies such as SD-WAN and cloud-based point solutions focus more on cost reductions, but don’t address the underlying security vulnerabilities to sufficiently protect internet-reliant businesses. Privafy was purpose-built to secure data in today’s modern world. We have democratised internet security to protect data in a way that is easier to deploy and far more economical for any-sized enterprise, regardless of where or how it works.” Pai cited a Gartner research document, The future of network security is in the cloud, which noted that digital business transformation inverts network and security service design patterns, shifting the focus to the identity of the user and/or device, and not the datacentre. The report said the idea of the legacy datacentre as the hub of business network and network security architecture was obsolete and had become “an inhibitor to the needs of digital business”.



Quote for the day:


"Without growth, organizations struggle to add talented people. Without talented people, organizations struggle to grow." -- Ray Attiyah