Daily Tech Digest - July 10, 2019

One reason why edge computing defies easy definition is that it takes many different forms. As Jaromir Coufal, principal product manager at Red Hat, recently pointed out to me, there is no single edge. Instead, there are lots of edges, depending on what compute features are needed. He suggests we think of the edge as a continuum of capabilities, with the problem being solved determining where along that continuum any particular edge solution will sit. ... Done properly, edge computing can provide services that are both faster and more reliable. Applications running on the edge can be more resilient and run considerably faster because their required data resources are local. In addition, data can be processed or analyzed locally, often requiring only periodic transfer of results to central sites. While physical security might be lower at the edge, edge devices often implement security features that allow them to detect 1) manipulation of the device, 2) malicious software, and 3) a physical breach, and to wipe data in response.
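The pattern of processing data locally and shipping only periodic summaries to a central site can be sketched in a few lines. This is an illustrative sketch, not from the article; the sensor readings and summary format are invented:

```python
from statistics import mean

def summarize_readings(readings):
    """Aggregate raw sensor readings locally at the edge so that
    only a compact summary has to cross the network."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": mean(readings),
    }

# The raw samples stay on the edge device; only this small
# dict is periodically transferred to the central site.
summary = summarize_readings([21.0, 21.4, 22.1, 20.8])
print(summary["count"])  # 4
```

The same shape applies whether the "summary" is a statistical rollup, an anomaly flag, or an inference result; the point is that the heavy data never leaves the edge.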



Deep Learning for Computer Vision: A Beginner's Guide

What distinguishes deep learning is that its networks contain many hidden layers. This extra depth empowers machines to learn from unstructured, unlabeled data as well as from labeled and categorized data. Note that none of these concepts is particularly new; rather, rapid advances in computing power and technology enable the models to be fed with large volumes of data. The more data available, the more proficient the models become at learning tasks. Speech recognition, image recognition, natural language processing (NLP), and computer vision are some of the areas that deep learning has improved dramatically. Many technology companies now specialize in providing platforms for training deep learning models in computer vision and other areas. Such companies have also facilitated further innovation in these branches of artificial intelligence. ... The most exciting potential use for this computer vision function is real-time semantic segmentation used by self-driving cars. Identifying and localizing objects accurately can improve the safety and reliability of autonomous vehicles.
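At its core, semantic segmentation assigns a class label to every pixel. A toy sketch of the final step, turning a network's per-pixel class scores into a label map, assuming NumPy (the class names in the comment are illustrative):

```python
import numpy as np

def scores_to_label_map(scores):
    """scores: (H, W, C) array of per-pixel class scores,
    as a segmentation network would output.
    Returns an (H, W) map with the winning class per pixel."""
    return np.argmax(scores, axis=-1)

# A 2x2 "image" with 3 classes (e.g. road, car, pedestrian)
scores = np.array([[[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
                   [[0.2, 0.2, 0.6], [0.7, 0.2, 0.1]]])
labels = scores_to_label_map(scores)
print(labels)
# [[0 1]
#  [2 0]]
```

The hard part, of course, is producing good scores in the first place; that is what the many hidden layers are for.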


How ASEAN firms are turning data into critical assets


Rouam said SGX is now looking to grow that figure by providing new data services, such as data visualisation, to external customers, effectively extending the value of its data beyond internal use. “We have data teams that use data for internal purposes, such as analysing customer behaviour as well as meeting regulatory and operational requirements,” said Rouam. “To turn data into a critical asset, we did a lot of work on the technology and legal processes.” This includes building a centralised database with in-memory capabilities to ensure timely access to machine-readable data for data scientists. SGX has also built a logical layer to help business users understand the data, along with a data dictionary and business glossary. At SP Digital, the digital services subsidiary of Singapore utility provider SP Group, data has always been critical to its operations, enabling it to control and manage the country’s critical energy infrastructure. About three years ago, the company embarked on a digital transformation initiative, which, among other areas, includes building a data lake to house all its data, according to Chang Sau Sheong, CEO of SP Digital.



Applied AI in Software Development

When it comes to validating the properties of an object present on the UI, TestComplete provides multiple options to access the object’s properties. For example, a button object can have properties such as enabled or disabled state, text, coordinates, ID, and class. Hence, it becomes easy to identify an object based on these properties and confirm the expected behavior. However, content rendered as images or in chart-like graphical interfaces, which are becoming more common with the proliferation of business intelligence and data-driven dashboards, is difficult to identify, validate, or perform automated actions on. This has changed with the latest version of TestComplete, 12.60, which overcomes the issue by making use of an API-driven optical character recognition (OCR) service. There are certain prerequisites to using this option. First, one needs to enable the OCR plugin by installing the extension under File –> Install Extension.
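TestComplete's actual OCR API is not shown in the article, but the validation pattern itself is simple: run OCR on a screenshot region, then assert on the recognized text. A language-agnostic sketch in Python, where `ocr_recognize` is a stub standing in for whatever OCR call the tool exposes (it is not a real TestComplete API):

```python
def ocr_recognize(image_region):
    # Stub: a real implementation would send the image region
    # to an OCR service and return the recognized text.
    return image_region["rendered_text"]

def assert_chart_label(image_region, expected_label):
    """Validate text that exists only as pixels, e.g. a label
    on a BI dashboard chart, via OCR output."""
    recognized = ocr_recognize(image_region)
    if expected_label not in recognized:
        raise AssertionError(
            f"Expected '{expected_label}' in OCR output: '{recognized}'")
    return True

# Hypothetical dashboard region whose text is not exposed as a property.
region = {"rendered_text": "Revenue Q2 2019: $4.2M"}
assert_chart_label(region, "Revenue Q2")
```

The design point is that OCR trades the precision of property-based checks for the ability to test content that has no accessible properties at all, so it is best reserved for image-only UI.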


Caitlin Long, Blockchain’s Ambassador of Hope

Notes from a RegTech conference keynote
What became obvious is that humans, no matter what industry or background, are inherently lazy. When it comes to understanding what’s going on in Wyoming and the 13 fast-tracked bills that were passed (thanks to the leadership and vision of Caitlin Long from 2018 to 2019), what is realized is that humans are lazy and crave convenience. When it comes to ensuring that, as emerging technologies arrive and society evolves, our policies and laws are “backward compatible,” humans are lazy. When it comes to educating policymakers, and to policymakers truly understanding the purpose and potential of new technology, humans again seek convenience. Complacency is not good enough anymore. Laziness is no longer an excuse. All of us have to step up to the plate, engage, and work in partnership with policymakers and technologists to thoughtfully craft legislation that 1) keeps consumers, the “main street moms and pops,” safe, and 2) keeps the nefarious “bad guys” at bay


Shocked by your cloud provider bills? Here’s what to do about it

One of the most common reasons businesses see their cloud costs spiral out of control is that they moved to the cloud without a plan or strategy. It’s easy to buy and consume cloud, so if you don’t have a strategy governing your company’s cloud use, it’s really easy to buy and consume more than you meant to. If you don’t have a policy for cloud use, put one together now. This will help you manage the number of cloud platforms selected as well as the costs by making sure that using the cloud is an active decision rather than something your organization does by default. ... Because of the cloud’s ability to scale more or less infinitely, the problem for users is a bit like what you’d get in the summer during a heatwave: you blast the A/C for a week because you need the relief, but you’re in for a shock when the bill comes – and what can you do? You already used all that sweet, cool air. With the cloud, you can head this particular problem off by setting limits ahead of time with your cloud provider.
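Setting limits ahead of time can be as simple as an automated budget check that projects month-to-date spend to the end of the month and compares it against an agreed cap. A sketch (the figures and the linear projection are illustrative, not from the article):

```python
def projected_overrun(spend_to_date, days_elapsed, days_in_month, budget):
    """Linearly project month-end cloud spend and report whether
    it would exceed the agreed budget."""
    projected = spend_to_date / days_elapsed * days_in_month
    return projected > budget, round(projected, 2)

# Ten days in, $5,000 spent, against a $12,000 monthly budget.
over, projected = projected_overrun(
    spend_to_date=5_000, days_elapsed=10, days_in_month=30, budget=12_000)
print(over, projected)  # True 15000.0
```

Real cost-management tooling from the cloud providers does the equivalent with alerts; the value is in catching the A/C-in-a-heatwave bill before it arrives, not after.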


The direction of business intelligence is changing to forward


Data analysis has traditionally been a relatively straightforward process: Input data, generate a report, and analyze the report to glean insight. But business intelligence is changing. Traditional data analysis looks backward. It attempts to figure out why something happened; it does not reveal what will happen. The look of the reports has changed significantly from dot-matrix spreadsheets to eye-popping computer graphics, thanks to Tableau, Microsoft Power BI and other data visualization platforms. But they're still reactive. That won't be good enough for the next generation of BI platforms. "In the next three to five years, instead of asking questions of data, the data will start suggesting observations," said Tim Crawford, CIO strategic adviser at AVOA, a consulting firm in Rolling Hills Estates, Calif. "You'll see things that you hadn't even thought to ask in the first place. As AI comes into play, you can expect to see tools that will identify and highlight things you hadn't thought to ask in the first place."


The Importance of QA Testing for Software Development

If you only rely on internal testing by the same people who developed the software, then they may praise their own work and be reluctant to make changes. Having testers come from different backgrounds and cultures adds diversity to the testing. This is particularly important if you plan on launching your software, service or product worldwide. This is also a reason why companies should do layered QA testing at different stages of design or development. Early testing helps prevent costly mistakes and wasteful development for features that users will not want or care about. As the product develops, further testing and documentation help guide the process in the right direction: one that will satisfy market needs and consumers.  Therefore, QA testing is not just done to eliminate bugs in the end, but to make sure the correct procedures are in place. Rather than finding defects, it deals with preventing them throughout the development process.


Will IBM’s acquisition be the end of Red Hat?

The good news is that this merger of IBM and Red Hat appears to offer each of the companies some significant benefits. IBM makes a strong move into cloud computing, and Red Hat gains a broader international footing. The other good news relates to the pace at which this acquisition has proceeded. Initially announced on October 28, 2018, it has now been more than eight months in the making, so it's clear that the leadership of each company has not rushed headlong into this new relationship. Both parties to the acquisition appear to be moving ahead with trust and optimism. IBM promises to ensure Red Hat's independence and will allow it to continue to be "Red Hat" both in name and business activity. ... Will this acquisition be the end of Red Hat? That outcome is not impossible, but it seems extremely unlikely. For one thing, both companies stand to gain significantly from the other’s strong points. IBM is likely to be revitalized in ways that allow it to be more successful, and Red Hat is starting from a very strong position. While it’s a huge gamble by some measurements, I think most of us Linux enthusiasts are cautiously optimistic at worst.


Multimodal Sentiment Analysis: Addressing Key Issues and Setting Up the Baselines

The primary advantage of analyzing videos over mere text analysis, for detecting emotions and sentiment, is the surplus of behavioral cues. Videos provide multimodal data in terms of vocal and visual modalities. The vocal modulations and facial expressions in the visual data, along with text data, provide important cues to better identify the true affective states of the opinion holder. Thus, a combination of text and video data helps to create a better emotion and sentiment analysis model. Recently, a number of approaches to multimodal sentiment analysis producing interesting results have been proposed [11, 13]. However, major issues remain mostly unaddressed in this field, such as the consideration of context in classification, the effect of speaker-inclusive versus speaker-exclusive scenarios, the impact of each modality across datasets, and the generalization ability of a multimodal sentiment classifier. Not tackling these issues has made it difficult to effectively compare different multimodal sentiment analysis methods.
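One common way to combine the modalities is late (decision-level) fusion: each modality produces its own sentiment score and the scores are merged, here by a weighted average. This is a generic sketch; the scores and weights are illustrative, not taken from the paper:

```python
def fuse_sentiment(scores, weights):
    """scores/weights: dicts keyed by modality (text/audio/visual).
    Each score is a sentiment in [-1, 1]; returns their
    weighted average, a simple late-fusion decision."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

scores = {"text": 0.8, "audio": 0.4, "visual": -0.2}
weights = {"text": 0.5, "audio": 0.3, "visual": 0.2}
print(round(fuse_sentiment(scores, weights), 2))  # 0.48
```

Early fusion (concatenating modality features before classification) is the main alternative; which works better is exactly the kind of cross-dataset question the excerpt says remains under-studied.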



Quote for the day:


"When you accept a leadership role, you take on extra responsibility for your actions toward others." -- Kelley Armstrong


Daily Tech Digest - July 09, 2019

Colocation facilities buck the cloud-data-center trend

Poole said the average capital expenditure for a stand-alone enterprise data center that is not a part of the corporate campus is $9 million. Companies are increasingly realizing that it makes sense to buy the racks of hardware but place them in someone else’s secure facility that handles the power and cooling. “It’s the same argument for doing cloud computing but at the physical-infrastructure level,” he said. Mike Satter, vice president for OceanTech, a data-center-decommissioning service provider, says enterprises should absolutely outsource data-center construction or go the colo route. Just as there are contractors who specialize in building houses, there are experts who specialize in data-center design, he said. He added that with many data-center closures there is subsequent consolidation. “For every decommissioning we do, that same company is adding to another environment somewhere else. With the new hardware out there now, the servers can do the same work in 20 racks as they did in 80 racks five years ago. That means a reduced footprint and energy cost,” he said.


The Phantom Menace in Unit Testing

This is not a rant about unit testing; unit tests are critically important elements of a robust and healthy software implementation. Instead, it is a cautionary tale about a small class of unit tests that may deceive you by seeming to provide test coverage but failing to do so. I call this class of unit tests phantom tests because they return what are, in fact, correct results but not necessarily because the system-under-test (SUT) is doing the right thing or, indeed, doing anything. In these cases, the SUT “naturally” returns the expected value, so doing (a) the correct thing, (b) something unrelated, or even (c) nothing, would still yield a passing test. If the SUT is doing (b) or (c), then it follows that the test is adding no value. Moreover, I submit that the presence of such tests is often deleterious, making you worse off than not having them because you think you have coverage when you do not. When you then go to make a change to the SUT supposedly covered by that test, and the test still passes, you might blissfully conclude that your change did not introduce any bugs to the code, so you go on your merry way to your next task. 
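A minimal illustration of a phantom test in Python (a hypothetical example, not from the article): the SUT forgets to apply the discount, yet the phantom test passes because the "expected" value is also what the untouched input would naturally be:

```python
def apply_discount(price, rate):
    # Buggy system-under-test: the discount is never applied.
    return price  # should be: price * (1 - rate)

def phantom_test():
    # Phantom: with rate=0 the correct answer equals the input,
    # so this passes whether the SUT discounts correctly,
    # does something unrelated, or does nothing at all.
    assert apply_discount(100, 0) == 100

def revealing_test():
    # A non-phantom input exposes the bug immediately.
    return apply_discount(100, 0.2) == 80

phantom_test()           # passes despite the bug
print(revealing_test())  # False - the bug is caught
```

The fix is the same discipline as mutation testing applies automatically: pick inputs where the correct behavior produces a result that doing nothing could not.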


British Airways facing £183m GDPR fine


The ICO said its investigation found that a variety of information was compromised by poor security arrangements at the company, including login, payment card and travel booking details, as well as name and address information. Information Commissioner Elizabeth Denham said: “People’s personal data is just that – personal. When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.” The ICO said BA has cooperated with the investigation and has made improvements to its security arrangements since the breach came to light. The company now has 28 days to make representations to the ICO about the findings of its investigation and the proposed fine. Willie Walsh, chief executive of BA owner International Airlines Group, has confirmed that the airline will make representations to the ICO, according to Reuters. “We intend to take all appropriate steps to defend the airline’s position vigorously, including making any necessary appeals,” he said.



How artificial intelligence can transform the legal sector


Firms are experimenting with the use of chatbot technology to deliver basic legal advice. DoNotPay has already garnered a lot of attention for allowing users to appeal parking fines, and it’s not unreasonable to expect that, as the technology becomes more sophisticated, higher-quality and more specific advice could be offered by a similar machine learning (ML) tool. As with all new technologies, the ultimate aim is to ensure that firms offer a higher quality and more consistent service. It is important to note that AI is not positioned to outperform the high-end tasks performed by legal professionals, but should rather be seen as a tool designed to support them by carrying out time-consuming research or administrative tasks. That’s why now is the time for forward-thinking firms to begin integrating AI into their legal services as well as their administrative procedures. The use of AI does however give rise to a few practical and ethical considerations that legal teams must be aware of. Many revolve around the sensitive data that firms would be required to store on clients in order to offer an optimal service.


Think like a criminal to beat them at their own game ⁠— Frank Abagnale Jr

Crime today, of course, has a significant physical element. However, over the last 20 years there has been a criminal movement towards the digital. Cybersecurity Ventures predicts cybercrime damages will cost the world $6 trillion annually by 2021, up from $3 trillion in 2015. The attack surface area is now different, but “the one thing that never changes is that criminals are all the same,” said Mr Abagnale. “So, if you think like a criminal, it doesn’t matter what they do, you can figure out their motives and means.” Over the course of a 43-year career in the FBI, Mr Abagnale has worked on every single data breach, including TJX in 2007, and more recently, the Marriott and Facebook breaches. “The one thing that I’ve learnt is that every breach occurs because somebody in that company did something that they weren’t supposed to do, or somebody in that company failed to do something they were supposed to do,” he said. “It always comes down to the human element,” reiterated Mr Abagnale.


IAM market evolves, but at a cost

"We're in this spot with a lot of technical debt," Daum said, adding that State Auto is a G Suite customer and is in the cloud with AWS, but is hesitant to add on another vendor just for identity management. "We're paying a lot of money to a lot of different companies and we're trying to find a way to see which of those companies can be used for identity services. No offense to Ping Identity or Okta, but why pay them however much money if we can limit the amount of cooks in the kitchen." Emerging capabilities within IAM products intrigued Daum, but never bested ROI. "Where's the value added?" Daum said. "Everyone is talking about cloud and password-less and zero trust. Those buzzwords sound nice, but the cost to implement is still huge." Zero trust is a security architecture introduced by Forrester Research that is designed to assess threats not just from outside the network, but from within it. It applies the principle of "never trust, always verify" to anything trying to connect to the network, to ensure the network remains secure.


Don’t wait up for the open cloud

Open clouds have been a concept since cloud computing became a thing; the reality is that we’re dealing with public companies that have to return an investment to shareholders. They operate based on gaining profitable advantages and working within their own market microcosms. They court users in their own way, pushing their own cloud services, which leads to having workloads that are not easily transported from cloud to cloud. Indeed, if the objective is “cloud native,” by definition that's going to mean lock-in. A few open cloud standards have been pushed in the past, and currently as well. Although they found traction as private clouds, with some public cloud instances as well, private clouds have declined relative to public clouds, and the public cloud instances shut down. It’s just too hard to keep up with the larger public cloud players and their billion-dollar R&D and marketing budgets. This leads me to a few conclusions about the state of cloud computing now, as well as some projections of where things are likely to go: The notion of interoperable public clouds is not likely to happen unless the user bases demand it and the public cloud providers feel the pinch.


Dos and don'ts of navigating data analytics in the cloud

The marketing hype positioning the cloud as an “easy button” can draw you in, but the reality is that moving an enterprise data warehouse or another type of analytical environment to the cloud is just like moving from one database platform to another – and it comes with the same challenges. You and your team need to be ready to migrate, monitor and test the new environment, and when you are migrating systems that have developed over time, “lifting and shifting” does not come without running into technology issues or making functional decisions that impact how a business or application is run. It’s true that with the cloud, you never have to perform low-level administration of your environment, such as software updates and server sizing. However, higher-level administration such as database performance, usage analysis, cost management, and security and privacy management will always be a requirement.


Must-have features in a modern network security architecture

As the old security adage goes, “the network doesn’t lie.” Since all cyber attacks use network communications as part of their kill chain, security analysts must have access to end-to-end network traffic analysis (NTA) up and down all layers of the OSI stack. The best NTA tools will supplement basic traffic monitoring with detection rules, heuristics, scripting languages, and machine learning that can help analysts detect unknown threats and map malicious activities into the MITRE ATT&CK framework. ... Network security technologies must support granular policies and rules, subject to immediate alteration based upon changes in things such as user location, network configuration, or newly discovered threats/vulnerabilities. Organizations must have the ability to spin up/spin down or change network security services whenever and wherever they are needed. Modern network security controls must be able to accommodate internet of things (IoT) devices and protocols with the same types of strong policies and enforcement as they offer for standard operating systems. Finally, network security architectures must be built around easily accessed APIs for rapid integration.
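The heuristic side of NTA can be illustrated with a toy rule: flag hosts whose outbound byte counts sit far above the fleet baseline. This is an invented sketch (the threshold, hostnames, and traffic figures are made up), not how any particular NTA product works:

```python
from statistics import median

def flag_outliers(bytes_by_host, multiple=5.0):
    """Flag hosts sending more than `multiple` times the median
    outbound byte count - a crude traffic-volume heuristic of the
    kind an NTA tool would layer under richer detection rules."""
    baseline = median(bytes_by_host.values())
    return sorted(h for h, v in bytes_by_host.items()
                  if v > multiple * baseline)

traffic = {"web01": 1_200, "web02": 1_150, "db01": 1_300, "app07": 9_800}
print(flag_outliers(traffic))  # ['app07']
```

Real tools replace the fixed multiple with learned per-host baselines and correlate the flag with other signals before raising an alert, which is what keeps the false-positive rate workable.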


A Simplified Value Stream Map for Uncovering Waste

There are a number of ways to display waste in a system. The most common approach is probably the use of value stream maps. These are maps that show the journey of a product from raw material to finished goods delivered to customers. They are very helpful in understanding the flow of goods and pinpointing wasteful delays. These don’t always seem relevant to software engineering because the images of factories, trucks, and forklifts don’t apply. Even the versions developed specifically for software sometimes seem to lack the qualities of being simple and definitive. What if we just want to know one thing: for any given process, how much time is spent waiting versus working? This would give us a simplified view of waste for any process and would be helpful in making it more efficient. The details for constructing this are straightforward. Let’s define working as time spent actively creating a product, time for which customers would gladly pay. Let’s define waiting as time spent waiting on something, time for which customers would not want to pay. We use duration (not effort) for both and we maintain consistent time units between them.
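The working-versus-waiting split boils down to a single ratio, often called flow efficiency: working time divided by total elapsed time. A quick sketch (the process steps and durations are invented for illustration):

```python
def flow_efficiency(steps):
    """steps: list of (kind, duration) tuples, where kind is
    'working' or 'waiting' and all durations share one unit.
    Returns working time as a fraction of total elapsed time."""
    working = sum(d for kind, d in steps if kind == "working")
    total = sum(d for _, d in steps)
    return working / total

# Days in a hypothetical feature's journey: code, wait for review,
# address comments, wait for deployment.
steps = [("working", 2), ("waiting", 5), ("working", 1), ("waiting", 8)]
print(flow_efficiency(steps))  # 0.1875
```

A result like 0.1875 says only about 19% of the elapsed time was value-adding work, which points improvement efforts at the queues rather than at making people code faster.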



Quote for the day:


"The one nearest to the enemy is the real leader." -- Ugandan Proverb


Daily Tech Digest - July 08, 2019

An eco-friendly internet of disposable things is coming

The “internet of disposable things is a new paradigm for the rapid evolution of wireless sensor networks,” says Seokheun Choi, an associate professor at Binghamton University, in an article on the school’s website. “Current IoDTs are mostly powered by expensive and environmentally hazardous batteries,” he says. Those costs can be significant in any kind of large-scale deployment, he says. And furthermore, with exponential growth, the environmental concerns would escalate rapidly. The miniaturized battery that Choi’s team has come up with is uniquely charged through power created by bacteria. It doesn’t have metals and acids in it. And it’s designed specifically to provide energy to sensors and radios in single-use IoT devices. Those could be the kinds of sensors ideal for supply-chain logistics where the container is ultimately going to end up in a landfill, creating a hazard. Another use case is real-time analysis of packaged food, with sensors monitoring temperature and location, preventing spoilage and providing safer food handling.


How to Get Hands-On with Machine Learning

If you really want to understand the capabilities and limitations of machine learning, you have to get hands-on. Here's a short list of options for beginners. Everyone should have a conceptual understanding of machine learning, so they can communicate more effectively with practitioners. To really understand what machine learning can and can't do, you have to get hands-on with it, which is what the curious, the career builders, and the DIY problem-solvers are doing. The starting point differs for individuals based on their education and experience. However, the titles of resources may not necessarily reflect that fact. Following is a short list of resources with a bit of insight into their requirements and value. Deep learning, a subcategory of machine learning, has been omitted intentionally to keep the focus of this article on machine learning in general. Competitions provide an opportunity for anyone to get hands-on with machine learning. Don't let the word "competition" scare you, because you'll find a lot of helpful resources at these sites available free to anyone. Later, if you decide to compete, and if you achieve a prominent position on the leader board, you'll have something more to add to your resume.


What is data science? A method for turning data into value
The business value of data science depends on organizational needs. Data science could help an organization build tools to predict hardware failures, allowing the organization to perform maintenance and prevent unplanned downtime. It could help predict what to put on supermarket shelves, or how popular a product will be based on its attributes. "The biggest value a data science team can have is when they are embedded with business teams. Almost by definition, a novelty-seeking person, someone who really innovates, is going to find value or leakage of value that is not what people otherwise expected," says Ted Dunning, chief application architect at MapR Technologies. "Often they'll surprise the people in the business. The value wasn't where people thought it was at first." ... Data science is generally a team discipline. Data scientists are the forward-looking core of most data science teams, but moving from data to analysis, and then transforming that analysis into production value requires a range of skills and roles. For example, data analysts should be on board to investigate the data before presenting it to the team and to maintain data models.


Network Security and Performance Monitoring: The Basics

A security attack might be designed to either strain or eliminate a network’s resources. Once malware enters the network, it may continuously send requests for data to deliberately use up your available bandwidth. The severity of an attack like this could range from slowing the network down to a full-scale denial-of-service (DoS) attack. Whatever the intent, excessive or harmful data usage will put a huge strain on your network’s performance. Without the right security resources in place, your network will have to work hard to keep up. Another way security threats can overload your network is by installing resource-draining applications and leaving them to use up bandwidth. A network performance monitoring (NPM) solution can sweep for any unwanted software and alert your team to it so you can take steps to remove it from your infrastructure. Security threats can target any hardware on your network’s infrastructure. Malware might try to bring down either devices connected to the network or the network nodes that you have installed.


What is a botnet? When armies of infected IoT devices attack


A botnet is a collection of internet-connected devices that an attacker has compromised. Botnets act as a force multiplier for individual attackers, cyber-criminal groups and nation-states looking to disrupt or break into their targets’ systems. Commonly used in distributed denial of service (DDoS) attacks, botnets can also take advantage of their collective computing power to send large volumes of spam, steal credentials at scale, or spy on people and organizations. Malicious actors build botnets by infecting connected devices with malware and then managing them using a command and control server. Once an attacker has compromised a device on a specific network, all the vulnerable devices on that network are at risk of being infected. A botnet attack can be devastating. In 2016, the Mirai botnet shut down a large portion of the internet, including Twitter, Netflix, CNN and other major sites, as well as major Russian banks and the entire country of Liberia.


Blockchain and the sharing economy


Whether blockchain will ever play a meaningful role in the sharing economy is up for debate. Some skeptics say it's all hype and blockchain's role could be minimal at best. But proponents say there is a natural fit between blockchain and the sharing economy services. The Blockchain Council, a group of blockchain experts and enthusiasts who support research and development of the technology, has noted that businesses such as Uber and Airbnb depend on their users to bring value to their networks. "The problem with this model is that the revenue generated is not fairly shared with all of the members that help generate content," the council said. Blockchains, because of their decentralized nature, allow for smart contracts that can deploy software in a secure and decentralized manner, it said. "Therefore, with Blockchain implemented software, we do not need to rely on massive data centers to run the enormous profit-making platforms," the council said.


Meeting the Challenge of Artificial Intelligence

While AI is still an evolving technology, many applications have recently made impressive leaps. For example, computers can defeat chess champions, help drive cars, instruct drones to return automatically, provide medical diagnoses, perform as virtual assistants, and navigate vacuum cleaners through a furnished house. The AI applications for business involve training computers to do tasks employees can perform, learning from experience and adjusting to new data as needed. Currently, CPA firms can use intelligent robots to count inventories, inspect fixed assets, handle bank audit confirmations, and read contracts or other documents to generate meaningful insights. Some CPAs may assume that, like big data or blockchain, AI is a relatively recent development. To the contrary, AI research started even before the creation of the Accounting Principles Board, FASB’s predecessor, in 1959. Subsequently, several subfields of AI have emerged, including robotics, perception (vision and speech), machine learning (ML), and expert systems (ES).


Wipe Away the Threat of Wiper Attacks

As with ransomware, resisting wiper malware requires putting defenses in place before attackers come calling. "It is time for strong authentication, least privilege access control - or at least 'read-only' or 'execute-only' - and end-to-end application layer encryption," Murray says. But that's just the start. "You can add privileged access management - PAM - and safe backup with fast recovery to those three measures," Murray tells me. Don't stop there. "We need greatly improved proactive threat detection," Murray says. "We need out-of-band confirmations and alerts for all transactions, many data changes, and some users. We need document management systems for intellectual property." "In addition to implementing multi-factor authentication, make sure that legacy protocols that don't support MFA are either disabled or tightly restricted," Lee Neely, a veteran security professional at Lawrence Livermore National Laboratory, says in a recent SANS Newsletter. "Additionally awareness reminders, including spam/phishing reporting processes, would be timely." Unfortunately, many organizations don't have many of these essential defenses in place.



The role of blockchain in information governance


Today, businesses are finding success with a federated content approach using content services. They are building applications that call content services without the consumer needing to know which underlying system the content resides in. Within an organization, it is possible to access content across different content services platforms (CSPs) using a single set of content services. However, content also exists outside the control of a business. For instance, government embassies, immigration agencies and law enforcement organizations act upon visa applications at different stages of a visa process. Each organization has its own system for tracking visa applications, and each needs access to the latest information at all times. Similar multi-organization interactions occur in healthcare as patients move between different providers. Each provider adds content to the patient's record, creating a complete medical history. The need for instant access to the latest content can have a life-or-death impact.


Agile Planning: Always Plan for a Product, Not a Project!

Planning checklist
Agile estimation is done by evaluating the amount, complexity, risk, duration, and business value of the work. Many agile estimation techniques are in practice, including T-shirt sizing, Planning Poker, The Bucket System, Fist to Five, Dot voting, and Affinity mapping. Velocity is the amount of work done by the team in a given time; in Agile we sum the story points completed in a sprint to determine the velocity. It is a rough measure of the team's productivity; it depends on various factors, but it works best with stable and experienced teams. So, while working with new teams, product owners and scrum masters should be liberal and let the team stabilize. Relative estimation sizes items not in units of time but by how they compare to each other in complexity. Instead of estimating each user story separately, we estimate by comparing or grouping items of similar difficulty. For example, feature B might be "twice as complex" as feature A, which you have already completed. So, if feature A took two weeks, we can guess that feature B would take four weeks to complete.
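The velocity and relative-estimation arithmetic described above can be sketched in a few lines of Python; the sprint numbers and point values here are hypothetical, chosen only to illustrate the calculation:

```python
import math

# Hypothetical story points completed in the last four sprints.
completed_points = [21, 18, 24, 19]

# Velocity: average story points the team completes per sprint.
velocity = sum(completed_points) / len(completed_points)  # 20.5

# Relative estimation: feature B is judged "twice as complex" as
# feature A, so it gets twice the points rather than a time estimate.
feature_a_points = 5
feature_b_points = feature_a_points * 2

# Forecast how many sprints a backlog of estimated points will take.
backlog_points = 82
sprints_needed = math.ceil(backlog_points / velocity)

print(velocity, feature_b_points, sprints_needed)  # 20.5 10 4
```

With a new or changing team, the velocity average is unreliable, which is why the advice above is to let the team stabilize before leaning on these numbers.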



Quote for the day:


"A true leader always keeps an element of surprise up his sleeve which others cannot grasp but which keeps his public excited and breathless." -- Charles de Gaulle


Daily Tech Digest - July 07, 2019

Anti-fraud analytics must be about prevention, not detection

The real challenge for businesses, he said, is not just building the biggest analytics platform possible – but embedding those analytics into operational processes, helping prevent fraud and cybercrime rather than detecting it after the fact. It’s a big difference that can make all the difference for financial-services institutions that have been pummelled by data breaches and struggled to maintain consumer confidence in their fraud and data-privacy protections. A recent Unisys survey found that Australians are by far the least trusting of their banks’ data protections – a perception that is hardly helped by incidents such as the recent exploitation of the Westpac PayID payment service. Indeed, true to expectations, the increasing pace of financial-services transactions – for example, through Australia’s New Payments Platform (NPP), on which PayID was built – had had a flow-on effect in terms of the fraud it facilitates. This trend towards real-time transactions, Henderson said, has ratcheted up the urgency for every company to understand the vulnerabilities in their payments processes and intelligently apply targeted machine learning-driven analytics to prevent fraud – not just detect it after the fact.


New Wearable Voice Recognition Sensor Cuts out Ambient Noise Interference

A pair of researchers from Pohang University of Science & Technology came up with a technology that’s superior to current voice recognition options, and it could lead to more accuracy even when people use voice recognition in potentially noisy areas, like train stations or shopping malls. If a person puts their hand against their throat while speaking, it’s easy to feel the vibrations associated with the voice. The researchers took that into account while developing their voice recognition sensor. It’s a wearable device that recognizes a person’s voice according to how their neck skin vibrates. That approach means things like ambient noise or the volume of a person’s speech do not risk making it harder to decipher. The scientists determined that sound pressure is proportional to the acceleration of the neck skin’s vibration at certain sound levels, and that they could use that knowledge to create a sensor that quantitatively measured the voice. They made a device composed of a slim polymer film plus a diaphragm featuring tiny holes.



It was a really bad month for the internet

On June 24, Cloudflare dropped 15% of its global traffic during an hours-long outage because of a network route leak. The networking giant quickly blamed Verizon (TechCrunch’s parent company) for the fustercluck. Because of inherent flaws in the border gateway protocol — which manages how internet traffic is routed on the internet — Verizon effectively routed an “entire freeway down a neighborhood street,” said Cloudflare in its post-mortem blog post. “This should never have happened because Verizon should never have forwarded those routes to the rest of the Internet.” Amazon, Linode and other major companies reliant on Cloudflare’s infrastructure also ground to a halt. A week later, on July 2, Cloudflare was hit by a second outage — this time caused by an internal code push that went badly. In a blog post, Cloudflare’s chief technology officer John Graham-Cumming blamed the half-hour outage on a rogue bit of “regex” code in its web firewall, designed to prevent its customer sites from getting hit by JavaScript-based attacks.
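The post-mortem blamed a single regular expression in the web firewall. A classic way a regex can consume a whole CPU is catastrophic backtracking from nested quantifiers; the pattern below is a textbook illustration of the failure mode, not Cloudflare's actual rule:

```python
import re

# Nested quantifiers such as (a+)+ are a classic source of catastrophic
# backtracking: when the input almost matches but fails at the very end,
# the engine retries every way of splitting the 'a's between the inner
# and outer quantifier.
vulnerable = re.compile(r'^(a+)+$')

# Matches instantly when the whole string satisfies the pattern...
assert vulnerable.match('a' * 18) is not None

# ...but one trailing non-matching character triggers roughly 2**18
# backtracking attempts here. A few more characters would effectively
# hang the process, which is how one bad rule can pin CPUs fleet-wide.
assert vulnerable.match('a' * 18 + '!') is None
```

Engines that guarantee linear-time matching (such as RE2-style automata) avoid this class of outage by disallowing backtracking entirely.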


How Is AI Driving Software Testing?

AI is also capable of test optimisation itself. This can help find which tests are most efficient and accurate, and reduces the production of redundant test cases. Software testing can be an expensive process, so ensuring the right tests are carried out is a key element, which can be sped up with the help of artificial intelligence. Testing the business impact of changes on customers is another key way that AI is being utilised. This helps identify any issues caused to the customer by updates and new releases, allowing quick fixes before customers are driven away from the software, without waiting for customer feedback. AI can even predict ahead of a release whether customer satisfaction will go up or down with a new version of the software, giving you the opportunity to fine-tune it beforehand and ensure customers are retained. This may be how AI is shaping software testing right now, but there is plenty more to come. More complex methodologies that test interconnected technology, such as Internet of Things devices, will be transformed by AI, which will also factor in whether the end user believes the result is correct.


Encryption laws are creating an exodus of data from Australia: Vault


"As multinational companies move physical, operational, and legal jurisdiction offshore, they easily sidestep the AA Act -- in effect thwarting the AA Act," Vault said. "Current legislation does not prevent these companies continuing to provide services to Australian citizens, companies or government. In effect, these companies are eluding the law and attaining revenue while everyday Australian citizens are suffering the consequences." A submission by the Australian Civil Society Coalition -- consisting of Digital Rights Watch, Blueprint for Free Speech, Human Rights Law Centre, NSW Council for Civil Liberties, Queensland Council for Civil Liberties, Liberty Victoria, Access Now, Electronic Frontiers Australia, and Future Wise -- reiterated prior calls for the laws to be entirely repealed. The coalition called for an "enforceable federal human rights framework" to prevent Australia being the weakest link in the Five Eyes network, as well as for protection for whistleblowers in relation to the encryption laws, and for the use of warrants and judicial consent for notices issued.


Size Does Matter: Tackling SMB Cybersecurity Concerns


One of the main issues is businesses’ perception of cybersecurity. Many view the whole premise as a business drain: a tick-box exercise, or time and money spent on something for which it is difficult to show ROI. Cybersecurity was cited as just one consideration as businesses become more established and grow. SMB leaders say that their biggest priorities are attracting new customers (36%) and increasing business growth and profitability (29%). In fact, only 35% perceive cybersecurity as a significant threat. However, these cyber threats take up a significant amount of SMBs’ time, with leaders admitting they spend almost a day per week (or 18% of their time) on cybersecurity-related tasks. Ultimately, customer relationships and contracts are on the line, but few businesses focus on effective cybersecurity education for employees. Rather than detracting from growth, cybersecurity investment can be viewed as a facilitator and differentiator for SMBs over the long term.


New digital banks challenge HSBC’s Hong Kong dominance


“We’re not ignoring the fact that we’ve got eight new competitors,” said Mr Martin, who noted the high calibre of the investors in the new virtual banks. “But our reaction is, we know what we need to do and we’re doing it already.” Some of HSBC’s established rivals have decided to launch their own digital lenders: StanChart has teamed up with Hong Kong Telecom and online travel agent Ctrip; Bank of China is working with the Jardine conglomerate and an offshoot of China’s JD.com ecommerce group. StanChart also generates a significant chunk of its profits from Hong Kong retail banking — making $740m of pre-tax profit in the business last year, or 30 per cent of the total for the bank. However, its customer base is dominated by older wealthy customers, meaning the virtual bank could pick up new customers without cannibalising too much of its existing business. Samir Subberwal, head of retail banking in China and Asia for StanChart, pointed out that its partners Ctrip and Hong Kong Telecom have about 5m customers to whom it can market the new venture.


Threat vs. Challenge

Think about it: threats are inherently a bad thing. They cause harm, provide little room for growth or learning, and rarely provide any opportunity or feedback beyond surviving through the ordeal. None of that helps you feel confident about preparing or allows you to look forward to the event. Instead, it fills you with a sense of dread, apprehension, and anxiety - none of which are facilitative emotions. So, what do we do about it and how can we get back to maximizing our impact? It comes down to the Performance Mindset skills of reframing and perspective. When you start to dive deeper, you realize the problem isn't the importance of the event; the problem is how you view the event. By reframing the way you view your upcoming event, you can start to see it as a challenge instead of a threat. What’s the difference between a threat and a challenge? Everything. Where a threat is inherently bad, challenges are usually viewed as good or fun. Where threats provide little room for growth or learning, challenges, by nature, drive growth.


The Important Difference Between Virtual Reality And Mixed Reality

Mixed reality is the latest immersive technology and, as a result, there aren’t as many publicised use cases of it compared to virtual reality. However, in its latest iteration, HoloLens 2 is an untethered device that Microsoft hopes will have many business applications to help people across an organisation communicate, collaborate and learn together. Ford is using mixed reality technology for business purposes. It uses the HoloLens to prototype vehicles in a virtual environment, skipping the physical prototypes of the conventional production method. Mixed reality takes a lot more processing power than either virtual or augmented reality, and it relies on an MR headset that offers either a holographic experience through translucent glasses or an immersive experience. While less immersive than a virtual reality experience, mixed reality pulls from virtual and augmented realities to join virtual and real worlds to create an extremely believable and effective interaction. Users can interact with the objects thanks to either gesture/gaze/voice recognition technology through a headset or with a pair of motion controllers.



The Biggest Cybersecurity Crises Of  2019 So Far

One of the most concerning corporate data breaches so far this year is that of the American Medical Collection Agency, a massive healthcare-related debt collector. The company discovered that it had been breached in March, and filings with the US Securities and Exchange Commission indicate that the intrusion on AMCA's systems lasted from August 2018 through March 2019. The incident was first publicly reported at the beginning of June after the medical testing firm LabCorp said that 7.7 million of its customers had data exposed because of AMCA, and Quest Diagnostics said that records from 12 million of its patients had been exposed. AMCA said that the compromised information included first and last names, dates of birth, phone numbers, addresses, dates of medical services, healthcare providers, and data on balances due. The stolen information did not include insurance ID numbers or Social Security numbers. Because AMCA contracted with so many companies, it's possible that additional organizations—and therefore other patients—were affected as well.



Quote for the day:


"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind


Daily Tech Digest - July 06, 2019


There is still a long way to go before PR achieves artificial intelligence nirvana, Graham explains. Talking about Alexa and Google Home devices and autonomous cars, she continues: “These are employing machine learning to improve performance by analyzing and incorporating the data that they receive, but they aren’t exactly flawless and they employ many thousands of data scientists (along with millions of everyday users) to inform and refine the technology.” In today’s competitive market, communicators must continue investing in AI. However, Rausch advises organizations to stay safe by remaining operational without depending on artificial intelligence promises and by taking advantage of the ways technology has proven itself to empower the PR process. Computers can act very decisively within seconds when they find evidence that something is happening. This shouldn’t mean that we can fully trust automated insights. “Hesitation is a very human thing to do. Computers don’t hesitate…they are absolutely literal,” explains Rausch.



Beyond Limits: Rethinking the next generation of AI

Beyond Limits evolved out of work with NASA's Jet Propulsion Laboratory (JPL) on remote rovers used to explore places like the moon and Mars. Due to the communications lag in space, real-time control is virtually impossible. Any AI solution must not only be fully autonomous, it must be able to train and, ideally, correct itself. When there is a problem it can’t correct, the bandwidth limitations for communication make full reprogramming problematic…but point patches are certainly possible. This resulted in an AI platform uniquely able to be updated, modified and, to a certain and initially limited extent, able to both teach itself and make corrections while disconnected. This unusual requirement likely has made the resulting AI nearly ideal for areas where the AI must often act independently of oversight – and/or in areas where problems can escalate very rapidly – and the AI must be able to deal with a diversity of known and unknown issues.


Transforming Organisations, Changing The Nature Of Work

In the Fourth Industrial Revolution the biggest challenge for every business will be ‘speed to capability’. In other words, how quickly can a company retool itself, both in terms of technology and skills, in order to perceive, analyse, understand, and respond to continuously changing customer behaviour and expectations? Cloud technologies can provide retooling agility, but that is not enough. Companies will need to reorganise work in order to obtain human agility as well. They need to be able to access and deploy a wide range of skills quickly and on demand. This means that we must forget the concept of a ‘job’. This concept is a relic of the First Industrial Revolution, when stability was critical for business success and people were deployed in stable organisational units. ... Human workers will be defined by their skills and not by job titles. In such a world, leadership needs to radically change too. Instead of a supervisory role that ensures processes are dutifully followed by all, the new leaders should be more like orchestra conductors: bringing together diverse talent and technology into a coherent whole that can deliver an excellent performance whatever score you put in front of them.


Experts Discuss Data Science and Machine Learning Best Practices

Surviving and thriving with data science and machine learning means not only having the right platforms, tools and skills, but identifying use cases and implementing processes that can deliver repeatable, scalable business value. The challenges are numerous, from selecting data sets and data platforms, to architecting and optimizing data pipelines, and model training and deployment. In response, new solutions have emerged to deliver key capabilities in areas including visualization, self-service and real-time analytics. Along with the rise of DataOps, greater collaboration and automation have been identified as key success factors. DBTA recently held a webinar with Bethann Noble, director of product marketing, machine learning, Cloudera; Gaurav Deshpande, VP of marketing, TigerGraph; and Will Davis, senior director of product marketing, Trifacta, who discussed new technologies and strategies for expanding data science and machine learning capabilities.


Visa says payment industry can move away from using passwords


For security-minded individuals, mobile device manufacturers have addressed concerns about stolen biometric information by storing and encrypting biometric templates — algorithmic representations instead of actual biometric attributes — locally on consumer-owned devices instead of the cloud. This ensures an individual is always in possession of their personal biometric data with the option to delete the data at any time. In addition, authentication accuracy is bolstered by liveness detection used by biometric scanners and software that can identify if a fingerprint is copied or a facial scan is of a mask. It’s been roughly six years since fingerprint sensors were integrated into consumer smartphones and in this short amount of time, consumers have grown increasingly comfortable with the approach. The need for quick and easy authentication will only increase with the growth of digital products and services, and remembering unique passwords for every internet-connected device or app is untenable.


How to Effectively Lead Remote IT teams

The best measure of team collaboration is how many times team members interact. If you are not at the same office, make sure you design special meetings for interaction, and encourage people to do it even online. For example, tell George to call Stefan, because he has something interesting to share! Or ask your remote team to go to lunch together and give them a topic to talk about. It is very inspiring if the topic is not about work but something of higher value, such as how we can stop poverty or how we ended up with the situation in Syria. The last point is about live interaction and shared fun experiences. If you doubt the investment of getting people in the same place, consider that I personally invest in inviting our partners to join us during our team building events. Each year we celebrate our company anniversary at the seaside – this year it will be our 13th and we will celebrate at one of the best Black Sea resorts – if you are one of our partners just reach Burgas on 13 July – the rest is on us!


Five Things to Understand About Digital Transformation

Today, leaders are talking about 3D printing, Internet of Things (IoT), robotics and similar advancements in digital technology, which can drastically impact organizations and industries. Many leaders, however, are missing a few key points, resulting in a failure to leverage the power of digital in their business and a slide into irrelevance: Digitalization is not just a technology trend; it is an overarching business transformation driven by a shift in the organizational mindset. Digitalization is not characterized by creating mobile apps and having a social media presence; it is an entirely new approach to business. Digitalization is not the same as digitization: digitization is the process of converting the physical and analogue into something virtual and digital, while digitalization is leveraging technology to create an exceptional customer experience, become agile and unlock new value. We are in the Era of the Digital BLUR. Organizations leveraging the power of digital are playing by very different rules and are attacking the incumbents from practically every industry. How?


Scrum Team: What Is Your Inner Compass?

If you can create a vision for your Scrum team, and you find them all aligned in the right direction, your work has a reasonably good chance at success. Have you ever been part of a Scrum team that struggled hard making any progress? Team members had plenty of talent, required resources, and opportunities, but they just couldn't progress enough and create impact.  If the above plot sounds familiar to you, there's a strong possibility that you might find reading this article valuable. Great vision precedes success, and a compelling vision provides the right direction to the team. If you are unaware of the team's vision, you can't act with conviction. If you haven't inspected the vision in light of your purpose, you can't even be sure that the team you are on is the appropriate one for you. If the team members have an agenda of working against each other, the team's spirit and drive gets lost. Conversely, a team that embraces a vision is more focused, energized, and committed. It knows the reason for its existence. So, how do you inspect a team vision? How do you know whether it is worthy and compelling enough to drive people?


Automated Peril: Researchers Hack 'Smart Home' Hubs

They managed to retrieve a hardcoded SSH private key (CVE-2019-9560) in the controllers. By removing and then imaging an SD card from the controller, they were able to extract the private key, which was needed for root access. The private key was stored in a password-protected folder called /etc/dropbear and named dropbear_rsa_host_key. But they were still able to extract it despite it being password-protected. That SSH key isn't unique, either, so it could be reused across other controllers. In a short video, they show how it is possible to use the vulnerabilities to unlock a Yale lock linked to the controller. The researchers also discovered a local API authentication problem. They found a SHA1 password hash, and because the controller uses the pass-the-hash method rather than requiring the credentials to be input, they were able to construct a working authentication request. After that, they say it would be possible to send an authenticated request to unlock a lock.
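The pass-the-hash weakness described above can be illustrated with a small, hypothetical sketch. The password, function names, and API shape are invented; the point is that when a server compares a client-supplied hash directly, the hash itself becomes a reusable credential:

```python
import hashlib, hmac, os

# Hypothetical controller-side check that compares a client-supplied
# SHA-1 hash directly against the stored hash ("pass-the-hash").
STORED_HASH = hashlib.sha1(b"hunter2").hexdigest()

def vulnerable_auth(client_supplied_hash: str) -> bool:
    # Flaw: no salt, no challenge; the hash itself acts as the password.
    return hmac.compare_digest(client_supplied_hash, STORED_HASH)

# An attacker who recovered the hash (e.g. from extracted firmware)
# replays it without ever learning the real password.
assert vulnerable_auth(STORED_HASH)

# A safer design mixes in a fresh random challenge, so a captured
# response cannot be replayed verbatim (sketch only; real protocols
# also use salts, rate limiting, and per-device keys).
def challenge_response(secret: bytes, challenge: bytes) -> str:
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

c1, c2 = os.urandom(16), os.urandom(16)
assert challenge_response(b"hunter2", c1) != challenge_response(b"hunter2", c2)
```

The same reasoning explains why the hardcoded, non-unique SSH key is so damaging: any secret that is identical across devices and sufficient on its own for access only needs to be extracted once.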


What CIOs and CTOs Can Learn From Smart Cities

"One of the hard things is determining the problems you may not know about. One of those things is one-way streets," said Sherwood. "We've been working with NTT to count the number of drivers, and based on historic data and analytics we can start predicting when we might have a wrong-way driver. We're able to count the number of vehicles to determine whether we need to invest more in signage, change the road layout, [or otherwise] solve the problem." Las Vegas is also using video analytics to improve traffic flow through dynamic signal timing. It's also counting pedestrian traffic and monitoring environmental factors, all for the purpose of better decision-making. ... "We have a variety of projects we're working on that focus on six key areas: public safety, education for workforce development and to help our population prepare for the future, economic development, health and wellness, social aspirations to try to close the digital divide, and mobility, which focuses on how we move people around the city more efficiently."




Quote for the day:


"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis


Daily Tech Digest - July 05, 2019

Are Programming Languages Key to the Evolution of Machine Learning?

“We are at the stage where the data, compute and deep learning algorithms that are absolutely necessary to make AI a reality have all become abundant,” Tutuk said. “But just like the early days of computer technology, the use of state-of-the-art AI is locked out of the reach of millions of developers. At present, even the most popular deep learning frameworks all require a great level of expertise.” There was a similar trend in the early days of developing computer software, she said: programming languages like Fortran, C, and C++ were easier to use than assembly languages but were still largely inaccessible to most. It was the development of high-level programming languages like Java, Python, and PHP that made computer programming much more widely accessible around the world. “Without these high-level abstractions, the digital world as we know it today would not exist,” Tutuk said.



These are the top skill sets for a successful blockchain team

Those entering the blockchain development/engineering field should have the mentality of a hacker - or the ability to problem-solve collaboratively in a workshop setting when a client presents a business problem. They need to be able to think through the business objectives, implications and value "for each of the participants and then [define] the architecture and overall solution flow," KPMG said. "It is this collaborative approach that leads to a successful application of blockchain." Given the lack of coursework around blockchain and its relatively new existence in the enterprise, a team must be open to exploring and experimenting by "hacking the problem" from a business and IT perspective, according to KPMG. "I'd say at KPMG we've been very successful at taking [employee] skills in-house and upskilling them to deliver blockchain skills," Keele said. "Until universities start printing blockchain degrees, that will be the pattern that will continue."


Security and privacy key to smart buildings and cities


One of the biggest challenges is the huge number and variety of stakeholders who all have a role to play and need to work in collaboration. These include building owners, property developers, landlords, building occupants, architects, technology suppliers, building services engineers, town planners, chief security officers, chief information security officers, data protection officers and more. At the core of the security problem is the fact that many of the systems that smart buildings and cities will need to rely on will be linked to a wide variety of internet of things (IoT) connected devices and sensors that are potentially vulnerable to cyber attacks. The whitepaper underlines the importance of considering and evaluating cyber security throughout the whole supply chain to protect data, maintain privacy and keep the risk associated with cyber threats to a minimum. According to the whitepaper, this process should always start by looking at device security and the supplier’s cyber maturity.


How Developers Can Learn the Language of Business Stakeholders

Business stakeholders are not your enemies; once they have enough sound information on where we are and what we expect to happen shortly, they are willing to accept reasonable requests or decisions - even the additional learning time which team members may require. What you can do is approach the opportunities for learning using the learning curve effect. Although this is something we should not avoid, I often see this element being skipped over when planning a project or forecasting work. We tend to translate the metrics and statistics from the stable state to the initial phases, when the team is still forming. Similarly, this applies to a new person in a role, who needs to learn not only her place in the project, but very often new responsibilities as well. Let’s keep this in mind when planning work or identifying impediments. You can read more on the learning curve effect in a separate article I wrote on the topic, Never stop learning – why is the learning curve effect so powerful?


Enterprise architect role is more about business than ever


"In the past that's been somewhat separated," Nelson said. "There might have been dedicated business architects running around that live on the business side that may or may not interact with EA." He said that in a similar vein, CIOs are now frequently called to the overall business strategy table, and that trend is dragging all of IT -- particularly enterprise architects -- in the same direction. But these organizations need more than just a general IT liaison. Now that businesses put so much value on their digital strategy, they need constant input from architects that possess an intimate understanding of their software capabilities and can shape development practices to meet specific business needs. Aslinn Merriman, emerging technology architect at Sargento, Inc., a large food production company based in Plymouth, Wisconsin, agreed that the architect's purpose is to help set a strategy and facilitate development goals that align with other business units and the overall organization.


US Cyber Command Warns of Outlook Vulnerability Exploits

While the warning from Cyber Command did not offer many details, some security researchers, including analysts with Chronicle - the cybersecurity arm of Alphabet - suspect that this latest attack is related to the activity of an advanced persistent threat group known as APT33, which also goes by the name Shamoon. In research that FireEye published in 2017, analysts found that APT33 has possible ties to Iranian intelligence and has previously targeted aerospace and energy firms in the Middle East. Over the last two weeks, the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency has warned about an increase in Iranian espionage and cyber activity, including increasing use of so-called "wiper" attacks that render computers unusable. One of the largest wiper attacks ever recorded targeted the oil giant Saudi Aramco in 2012. In that case, the attackers used malware also called Shamoon, which has appeared in other attacks over the course of the last several years.



Facebook open-sources DLRM, a deep learning recommendation model  

Facebook AI Research (FAIR) open-sources much of its work, and its parent company is making DLRM available for free to help the wider AI community address challenges presented by recommendation engines, such as the need for neural networks to associate categorical data with higher-level attributes. “Although recommendation and personalization systems still drive much practical success of deep learning within industry today, these networks continue to receive little attention in the academic community,” the paper reads. “By providing a detailed description of a state-of-the-art recommendation system and its open-source implementation, we hope to draw attention to the unique challenges that this class of networks present in an accessible way for the purpose of further algorithmic experimentation, modeling, system co-design, and benchmarking.” The makers of DLRM suggest the model be used for benchmarking the speed and accuracy performance of recommendation engines. The DLRM benchmark for experimentation and performance evaluation is written in Python and supports random and synthetic inputs.
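The core idea DLRM explores -- embedding categorical features into dense vectors and then combining them through pairwise dot-product interactions alongside continuous features -- can be sketched in a few lines. This is a minimal NumPy illustration of that pattern, not Facebook's implementation; all sizes and the stand-in linear layer for the bottom MLP are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes -- DLRM itself is fully configurable.
num_categories = [100, 50]   # vocabulary size per categorical feature
embed_dim = 4                # shared embedding dimension
dense_dim = 3                # number of continuous (dense) features

# Embedding tables map each categorical value to a learned vector.
tables = [rng.normal(size=(n, embed_dim)) for n in num_categories]
# A bottom MLP would normally project dense features to embed_dim;
# a single random linear layer stands in for it here.
w_dense = rng.normal(size=(dense_dim, embed_dim))

def interact(cat_ids, dense_x):
    """Embedding lookup plus pairwise dot-product feature interactions."""
    vecs = [table[i] for table, i in zip(tables, cat_ids)]
    vecs.append(dense_x @ w_dense)     # projected dense features
    v = np.stack(vecs)                 # (num_features, embed_dim)
    dots = v @ v.T                     # all pairwise dot products
    # Keep only the strictly lower triangle: one term per unique pair.
    return dots[np.tril_indices(len(vecs), k=-1)]

feats = interact([7, 3], rng.normal(size=dense_dim))
print(feats.shape)  # 3 features -> 3 unique pairs -> (3,)
```

In the full model these interaction terms are concatenated with the dense representation and fed through a top MLP to produce a click-probability; the synthetic-input benchmark mentioned above exercises exactly this lookup-and-interact path.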


Google debuts Deep Learning Containers in beta

deep-learning-containers
The service, called Deep Learning Containers, can be run either in the cloud or on-premises. It consists of numerous performance-optimized Docker containers that come packaged with various tools necessary to run deep learning algorithms. Those tools include preconfigured Jupyter Notebooks, which are interactive tools used to work with and share code, equations, visualizations and text, and Google Kubernetes Engine clusters, which are used to orchestrate multiple container deployments. The service also provides machine learning acceleration capabilities with Nvidia Corp.’s graphics processing units and Intel Corp.’s central processing units. Nvidia’s CUDA, cuDNN and NCCL machine learning libraries are also included. In a blog post Wednesday, Google software engineer Mike Cheng explained that Deep Learning Containers are designed to provide all of the necessary dependencies needed to get applications up and running in the fastest possible time. The service also integrates with various Google Cloud services, such as BigQuery for analytics, Cloud Dataproc for Apache Hadoop and Apache Spark, and Cloud Dataflow for batch and stream processing using Apache Beam.


Implementing IoT – overcoming barriers to commercial adoption


The basic architecture of IoT comprises four domains: the sensors, the connectivity of those sensors, the data hub that makes data from all sorts of sensors interoperable (rather than stuck in silos), and the applications. The data hub plays a vital role in presenting the data to the applications in a uniform way. Davies highlighted the work being done at CityVerve, a smart city demonstrator in Manchester that encompasses a smart cycle light trial to understand cycle usage and improve cycle routes, an air quality trial linked to traffic density, and a water usage trial for leak management and demand management. Edge computing will play an important role in reducing connectivity demands, and zero-touch device management will be essential. Stuart Higgins, head of smart cities and IoT at Cisco, talked about some of the IoT trials and commercial deployments in the UK and worldwide. Many companies are digitising – seeing their operations and products as data to be managed in an IoT context.
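The data hub's job of presenting heterogeneous sensor data "in a uniform way" can be sketched as a small normalization layer. This is a hypothetical illustration of the pattern, not any CityVerve component; the vendor payload formats and field names are invented.

```python
from datetime import datetime, timezone

def normalize(source, payload):
    """Map a vendor-specific sensor reading onto one uniform record."""
    mappers = {
        # each mapper extracts (metric, value, unit) from its vendor format
        "cycle_light": lambda p: ("cycle_count", p["riders"], "count"),
        "air_quality": lambda p: ("no2", p["no2_ppb"], "ppb"),
        "water_meter": lambda p: ("flow", p["litres_per_min"], "L/min"),
    }
    metric, value, unit = mappers[source](payload)
    return {
        "metric": metric,
        "value": value,
        "unit": unit,
        "source": source,
        "ts": datetime.now(timezone.utc).isoformat(),
    }

# Applications see the same schema regardless of which silo the data came from.
record = normalize("air_quality", {"no2_ppb": 41.5})
print(record["metric"], record["value"], record["unit"])  # no2 41.5 ppb
```

Because applications consume only the uniform record, a new sensor type needs just one new mapper rather than changes in every downstream application, which is what keeps the data out of per-vendor silos.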


3 serverless development strategies for stateful applications

Functions should be directly accessible to each other. Without immediate connections, functions depend on a slow storage medium to transport data from one function to another, adding latency. In real-time application scenarios -- such as 24/7 monitoring systems -- that latency is unacceptable. Serverless functions predominantly underpin short-term workloads: resources are allocated to them when requested and reclaimed once the request ends. As a result, stateful applications built on serverless functions can't use traditional state mechanisms, such as global variables that hold data throughout the application's lifetime. Stateless functions can't persist data to local disk between invocations, and the application can't maintain a constant connection to the database. To create stateful applications, serverless developers can instead manage application state with database connections, an event payload or backend as a service (BaaS) to integrate with the application.
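The event-payload strategy mentioned above can be sketched simply: since each invocation is stateless, accumulated state rides along inside the event itself from one function to the next. The handler names and event shape below are invented for illustration and do not correspond to any real serverless framework's API.

```python
def count_words(event):
    """Stateless handler: all accumulated state arrives in the event."""
    state = event.get("state", {"words": 0})
    state["words"] += len(event["text"].split())
    # Returning state in the payload carries it to the next invocation.
    return {"text": event["text"], "state": state}

def summarize(event):
    """A second function consumes the state the first one forwarded."""
    return f'{event["state"]["words"]} words seen so far'

# One invocation's output becomes the next invocation's input event;
# no global variable, local disk, or persistent connection is needed.
evt = count_words({"text": "edge computing at scale"})
evt = count_words({"text": "stateful serverless", "state": evt["state"]})
print(summarize(evt))
```

The trade-off is payload size: event-carried state suits small, per-request context, while larger or shared state is better placed in a database or BaaS, the other two strategies the article lists.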



Quote for the day:


"A leader is best when people barely know he exists; when his work is done, his aim fulfilled, they will say: we did it ourselves." -- Lao Tzu