Daily Tech Digest - October 19, 2019

Lip-Reading Drones, Emotion-Detecting Cameras: How AI Is Changing The World


Specific lip-reading programs can decipher what people are saying from a distance, while gait-analysis software can identify an individual just by the way they walk. "Even if the drone is at 300ft, it can still operate effectively," Dronestream CEO Harry Howe said. While these particular drones are still in the testing phase, many intrusive technologies are already in use around the world. Take China, for example. Its Skynet system claims it can scan all 1.3 billion citizens within seconds. There are 200 million cameras scattered around the country which can track identity thieves, find fugitives, catch sleeping students and spot jaywalkers. This particular surveillance system led to 2,000 arrests from 2016 to 2018. Countries like Malaysia, Jamaica, Germany and Poland are considering installing similar systems, while a number of facial recognition trials have been conducted right here on Australian soil.



7 mistakes that ISO 27001 auditors make

Checklists are a great way of quickly assessing whether a list of requirements has been met, but what they offer in convenience they lack in depth of analysis. Organisations are liable to see that a requirement has been ticked off and assume that it’s ‘mission accomplished’. However, there may still be room to improve your practices, and it might even be the case that your activities aren’t necessary. A good auditor will use the checklist as a summary at the beginning or end of their audit, with a more detailed assessment in their report, or they’ll use a non-binary system that doesn’t restrict them to stating that a requirement either has or hasn’t been met. ...  In theory, they are a perfect fit. You already have a working relationship, and you’ll save time finding a consultant and bringing them up to speed on your organisation’s needs. Unfortunately, there is a clear conflict of interest in this relationship: you run the risk of allowing the auditor to manipulate their findings to persuade you to use them as a consultant.


Looking at the Enterprise Architecture Renaissance

In their enterprise architecture report, Ovum looked at the paradigm shift now transforming enterprise architecture (EA) into "architect everything" (AE). They reviewed seven EA solutions that have begun the transition from EA to AE. Interestingly, Ovum found that the vendors shared a similar view of the direction EA should move in. Most regarded non-EA features that help with business modeling, business process mapping and analysis, GRC, and portfolio management as standard features that EA platforms should include in their solutions. ... Today’s enterprise architecture approach needs to promote stronger collaboration and teamwork throughout the organization, so that everyone is on the same page with regard to company goals and desired outcomes. One example of an EA platform that does this effectively is Planview Enterprise One. It comes with collaboration and workflow tools that enable process- and project-driven work. Elements like Kanban boards and collaborative workspaces make it easy to bring stakeholders and contributors together under one roof, where they can share information and work together to push the company forward.


Top 6 email security best practices to protect against phishing attacks ...


Complicated email flows can introduce moving parts that are difficult to sustain. As an example, complex mail-routing flows that enable protections for internal email can cause compliance and security challenges. Products that require unnecessary configuration bypasses to work can also cause security gaps. As an example, configurations put in place to guarantee delivery of certain types of email (e.g., phishing simulation emails) are often poorly crafted and exploited by attackers. Solutions that protect both external and internal email and offer value without needing complicated configurations or email flows are a great benefit to organizations. In addition, look for solutions that offer easy ways to bridge the gap between the security teams and the messaging teams. Messaging teams, motivated by the desire to guarantee mail delivery, might create overly permissive bypass rules that impact security. The sooner these issues are caught, the better for overall security. Solutions that offer insights to security teams when this happens can greatly reduce the time taken to rectify such flaws, thereby reducing the chances of a costly breach.


How operators can make 5G pay


Some operators have started to partner with over-the-top (OTT) service providers to bundle their offerings with connectivity subscriptions, sometimes with an explicit charge and sometimes without (for example, by making certain streams unmetered against the customer’s data bundle). “With the improvements in network capabilities in the 5G era, customers can expect to enjoy more network services bundled with content provider services — including accelerated gaming — and the operator could offer its network service to the customer as part of that bundle,” said a senior executive at an Asian Internet player. In the 5G world, in which the network technology allows a far greater range of functionality that can be monetized, telecom companies have many more opportunities to develop collaborations with a variety of businesses and public agencies. We see four main options for how operators could monetize this greater functionality. The higher the relevance of the telecom operator’s brand to the use case, the greater the operator’s ability to own the customer relationship and claim a bigger share of revenues.


Beyond their value in ensuring consistent, predictable service delivery, SLOs are a powerful weapon to wield against micromanagers, meddlers, and feature-hungry PMs. That is why it’s so important to get everyone on board and signed off on your SLO. When they sign off on it, they own it too. They agree that your first responsibility is to hold the service to a certain bar of quality. If your service has deteriorated in reliability and availability, they also agree it is your top priority to restore it to good health. Ensuring adequate service performance requires a set of skills that people and teams need to continuously develop over time, namely: measuring the quality of our users’ experience, understanding production health with observability, sharing expertise, keeping a blameless environment for incident resolution and post-mortems, and addressing structural problems that pose a risk to service performance. Developing these skills requires a focus on production excellence, and a (time) budget for the team to acquire them. The good news is that this investment is now justified by the SLOs that management agreed to.
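To make the "bar of quality" concrete, here is a minimal sketch of the error-budget arithmetic behind an availability SLO; the target and request counts are invented for illustration, and a real system would pull them from monitoring rather than hard-code them:

```python
# Minimal error-budget arithmetic for an availability SLO.
# Illustrative only: counts would normally come from a metrics store.

def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> None:
    """Print how much of the error budget a service has consumed."""
    allowed_failures = total_requests * (1.0 - slo_target)  # budget, in requests
    budget_consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    availability = 1.0 - failed_requests / total_requests

    print(f"SLO target:      {slo_target:.3%}")
    print(f"Measured:        {availability:.4%}")
    print(f"Budget consumed: {budget_consumed:.1%}")
    if budget_consumed >= 1.0:
        print("Error budget exhausted: reliability work takes priority over features.")

# Example: a 99.9% availability SLO over 30 days of traffic.
error_budget_report(slo_target=0.999, total_requests=10_000_000, failed_requests=6_500)
```

With these invented numbers the service has burned 65% of its budget, which is exactly the kind of figure everyone who signed off on the SLO can act on.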


How open source software is turbocharging digital transformation


Make no mistake, the ever-expanding palette of vendor solutions on the market today remains an indispensable resource for enterprise-scale digital transformation. But there are compelling reasons to explore OSS’s possibilities as well. For example, OSS in emerging technology domains often includes work contributed by highly creative developers with hard-to-find expertise. By exploring OSS projects for artificial intelligence (AI), blockchain, or other trending technologies, companies that lack in-house experts can better understand what the future holds for these disruptive tools. Moreover, CIOs are realizing that when coders can engage with domain experts and contribute their own work to an OSS ecosystem, job satisfaction and creativity often grow, along with engineering discipline, product quality, and efficiency. As any software engineer knows, the ability to take established and tested code from an existing library, rather than having to create it from scratch, can shrink development timelines significantly. These findings spotlight OSS’s formidable promise. But they also make clear that open source is not an all-or-nothing proposition. IT leaders should think of OSS as a potentially valuable complement to their broader ecosystem, vendor, or partner strategy.


Yubico security keys can now be used to log into Windows computers


Starting today, users can use hardware security keys manufactured by Swedish company Yubico to log into a local Windows OS account. After more than six months of testing, the company released today the first stable version of the Yubico Login for Windows application. Once installed on a Windows computer, the application allows users to configure a Yubico security key (known as a YubiKey) to secure local Windows OS accounts. The Yubico key will not replace the Windows account password but will work as a second authentication factor. Users will have to enter their password, then plug a Yubico key into a USB port to finish the login process. Yubico hopes the keys will be used to secure high-value computers storing sensitive data that are used in the field, away from secured networks. Such devices are often susceptible to theft or loss. If the devices are not encrypted, attackers have various ways at their disposal to bypass normal Windows password-based authentication. Securing local Windows accounts with a YubiKey makes it nearly impossible for an attacker to access the account, even if they know the password.


The Fallacy of Telco Cloud

First, we proved the viability of virtualizing Telco workloads, with the investment in defining Network Function Virtualization (NFV) and a global set of trials, beginning in and around the first ETSI NFV working group meeting in 2012. Then, we focused on optimizing that virtualization technology – investment in Virtual Infrastructure Managers (VIMs), I/O acceleration technologies like the Data Plane Development Kit (DPDK), and para-virtualization technologies, such as Single Root Input/Output Virtualization (SR-IOV), for performance and manageability of SLA-backed network functions. Now, we’ve embarked on the next set of technology advancements: separating control and user planes, accelerating I/O functions with FPGAs and SmartNICs, and starting the migration of applications towards containers and cloud-native functions. This is the beginning of a second wave of technology-led investments in the Telco Cloud. ... In short, the technology is mature. The real question is – are we actually achieving the benefits of cloud in the Telco network?


Challenges of Data Governance in a Multi-Cloud World


The traditional contracts that mitigated security breaches and other noncompliance events in typical telecom network services have failed to deliver the goods for the cloud. Highly scaled, shared, and automated IT platforms such as the cloud can hide the geographic location of data — from both the customer’s and the service provider’s side. This can give rise to regulatory violations. Thus, contracting for the cloud is still in its infancy, and until litigation sheds light on regulatory issues and sets precedents for future cases, data-cloud breach issues will remain unresolved. Moreover, data aggregation will increase potential data risk, as more valuable data will occupy a common storage location. On the flip side, multi-cloud environments offer more transparency through event logging, and enterprise-wide solutions via automation tools: once a fix is devised, it can be instantly deployed across cloud networks. In recent years, risk management strategies specifically for the cloud have emerged; these have yet to be tested in multi-cloud environments.



Quote for the day:


"There are some among the so-called elite who are overbearing and arrogant. I want to foster leaders, not elitists." -- Daisaku Ikeda


Daily Tech Digest - October 18, 2019

Critical PDF Warning: New Threats Leave Millions At Risk

The PDFex vulnerability exploits ageing technology that was not designed with contemporary security considerations in mind; in essence, it takes advantage of the very universality and portability of the PDF format. And while it might seem like a fairly specific attack, most companies rely on secured PDF documents for the transmission of contracts, board papers, financial documents and transactional data. There is an expectation that such documents are secure. Clearly, they are not. The PDFex attack is designed to exfiltrate the encrypted data to the attacker when the document is opened with a password—being decrypted in the process. The PDFex researchers, “in cooperation with the national CERT section of BSI,” have contacted all vendors, “provided proof-of-concept exploits, and helped them fix the issues.” Of even more concern are the multiple vulnerabilities that have been disclosed in the popular Foxit Reader PDF application specifically—Foxit claims it has 475 million users. Affecting Windows versions of Foxit’s reader, the vulnerabilities enable remote code execution on a target machine.


Much-attacked Baltimore uses ‘mind-bogglingly’ bad data storage

After the attack in May, Baltimore Mayor Bernard C. “Jack” Young not only refused to pay, he also sponsored a resolution, unanimously approved by the US Conference of Mayors in June 2019, calling on cities to not pay ransom to cyberattackers. Baltimore’s budget office has estimated that due to the costs of remediation and system restoration, the ransomware attack will cost the city at least $18.2 million: $10 million on recovery, and $8.2 million in potential loss or delayed revenue, such as that from property taxes, fines or real estate fees. The Robbinhood attackers had demanded a ransom of 13 Bitcoins – worth about US $100,000 at the time. It may sound like a bargain compared with the estimated cost of not caving to attackers’ demands, but paying a ransom doesn’t ensure that an entity or individual will actually get back their data, nor that the crooks won’t hit up their victim again. The May attack wasn’t the city’s first; nor was it the first time that its IT systems and practices have been criticized in the wake of attack.


'The Dukes' (aka APT29, Cozy Bear) threat group resurfaces with three new malware families

According to researchers, three new malware samples, dubbed FatDuke, RegDuke and PolyglotDuke, have been linked to a cyber campaign most likely run by APT29. The most recent deployment of these new malware families was tracked in June 2019. The ESET researchers have named all activities of APT29 (past and present) collectively as Operation Ghost. This cyber campaign has been running since 2013 and has successfully targeted the Ministries of Foreign Affairs in at least three European countries. The researchers compared the techniques and tactics used by APT29 in its recent attacks to those used in the group's older attacks. They found many similarities in these campaigns, including the use of Windows Management Instrumentation for persistence, use of steganography in images to hide communications with Command and Control (C2) servers, and use of social media, such as Reddit and Twitter, to host C2 URLs. The researchers also found similarities in the targets hit during the newer and older attacks: ministries of foreign affairs.


Misconfigured Containers Open Security Gaps

The knowledge gap surrounding security risks and the blunders it causes are, by far, the biggest threat to organizations using containers, observed Amir Jerbi, co-founder and CTO of Aqua Security, a container security software and support provider. "Vulnerabilities in container images -- running containers with too many privileges, not properly hardening hosts that run containers, not configuring Kubernetes in a secure way -- any of these, if not addressed adequately, can put applications at risk," he warned. Examining the security incidents targeting containerized environments over the past 18 months, most were not sophisticated attacks but simply the result of IT neglecting basic best practices, he noted. ... While most container environments meet basic security requirements, they can also be more tightly secured. It's important to sign your images, suggested Richard Henderson, head of global threat intelligence for security technology provider Lastline. "You should double-check that nothing is running at the root level."


Microsoft and Alibaba Back Open Application Model for Cloud Apps


OAM is a standard for building native cloud applications using "microservices" and container technologies, with the goal of establishing a platform-agnostic approach. It's kind of like the old "service-oriented architecture" dream, except maybe with less complexity. The OAM standard is currently at the draft stage, and the project is being overseen by the nonprofit Open Web Foundation. Microsoft apparently doesn't see the Open Web Foundation as the spec's permanent home, as its "goal is to bring the Open Application Model to a vendor-neutral foundation," the announcement explained. Additionally, Microsoft and Alibaba Cloud disclosed that there's an OAM specification specifically designed for Kubernetes, the open source container orchestration solution for clusters originally fostered by Google. This OAM implementation, called "Rudr," is available at the "alpha" test stage and is designed to help manage applications on Kubernetes clusters. ... Basic OAM concepts can be found in the spec's description. It outlines how the spec accounts for the various roles involved with building, running and porting cloud-native apps.


Why AI Ops? Because the era of the zettabyte is coming.


“It’s not just the amount of data; it’s the number of sources the data comes from and what you need to do with it that is challenging,” Lewington explains. “The data is coming from a variety of sources, and the time to act on that data is shrinking. We expect everything to be real-time. If a business can’t extract and analyze information quickly, they could very well miss a market or competitive intelligence opportunity.” That’s where AI comes in – a term originally coined by computer scientist John McCarthy in 1956. He defined AI as “the science and engineering of making intelligent machines.” Lewington thinks that the definition of AI is tricky and malleable, depending on who you talk to. “For some people, it’s anything that a human can do. To others, it means sophisticated techniques, like reinforcement learning and deep learning. One useful definition is that artificial intelligence is what you use when you know what the answer looks like, but not how to get there.” No matter what definition you use, AI seems to be everywhere. Although McCarthy and others invented many of the key AI algorithms in the 1950s, the computers at that time were not powerful enough to take advantage of them.


Fake Tor Browser steals Bitcoin from Dark Web users


Purchases in these marketplaces are usually made using cryptocurrency such as Bitcoin (BTC) in order to mask the transaction and the user's identity. If a user visits these domains and tries to make a purchase by adding funds to their wallet, the script activates and attempts to change the wallet address, thereby ensuring funds are sent to an attacker-controlled wallet instead. The payload will also try to alter wallet addresses offered by Russian money transfer service QIWI. "In theory, the attackers can serve payloads that are tailor-made to particular websites. However, during our research, the JavaScript payload was always the same for all pages we visited," the researchers say. It is not possible to say how widespread the campaign is, but the researchers say that Pastebin pages promoting the Trojanized browser have been visited at least half a million times, and known wallets owned by the cybercriminals hold 4.8 BTC -- equating to roughly $40,000. ESET believes that the actual value of stolen funds is likely to be higher considering the additional compromise of QIWI wallets.


Server Memory Failures Can Adversely Affect Data Center Uptime

The Intel® MFP deployment resulted in improved memory reliability due to predictions based on the capture of micro-level memory failure information from the operating system’s Error Detection and Correction (EDAC) driver, which stores historical memory error logs. Additionally, by predicting potential memory failures before they happen, Intel® MFP can help improve DIMM purchasing decisions. As a result, Tencent was able to reduce annual DIMM purchases by replacing only DIMMs that have a high likelihood of causing server crashes. Because Intel® MFP is able to predict issues at the memory cell level, that information can be used to avoid using certain cells or pages, a feature known as page offlining, which has become very important for large-scale data center operations. Tencent was therefore able to improve its page offlining policies based on Intel® MFP’s results. Using Intel® MFP, server memory health was analyzed and given scores based on cell-level EDAC data.
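The EDAC data the article refers to is exposed by Linux under sysfs. As a rough illustration of the raw signal a tool like Intel® MFP builds on (this is not MFP itself, and the sysfs layout varies by kernel and platform), one could read the per-controller error counters like this:

```python
# Read per-memory-controller error counts from the Linux EDAC driver.
# ce_count = corrected errors, ue_count = uncorrected errors.
# Sketch only: sysfs layout varies by kernel and platform.
from pathlib import Path

def edac_error_counts():
    counts = {}
    for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
        try:
            ce = int((mc / "ce_count").read_text())
            ue = int((mc / "ue_count").read_text())
        except (FileNotFoundError, ValueError):
            continue  # controller without counters exposed
        counts[mc.name] = {"corrected": ce, "uncorrected": ue}
    return counts

if __name__ == "__main__":
    for mc, c in edac_error_counts().items():
        print(f"{mc}: corrected={c['corrected']} uncorrected={c['uncorrected']}")
```

A rising corrected-error count on one controller is exactly the kind of early signal that failure-prediction and page-offlining policies act on.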


Three Keys To Delivering Digital Transformation

More digitally mature organisations are beginning to view digital transformation as not just an internal technology infrastructure upgrade. It is more than an opportunity to move costly in-house capabilities to the cloud, or shift sales and marketing to online multi-channel provision. The focus today is on a more fundamental review of business practices, a realignment of operations toward core values, and a stronger relationship between creators and consumers of services. Within this context, digital modernisation programmes taking place across many organisations are accelerating the digitisation of their core assets, rebalancing spending toward digital engagement channels, fixing flaws in their digital technology stacks, and replacing outdated technology infrastructure with cloud-hosted services. Such programmes are essential for organisations to remain competitive and relevant in a world that increasingly rewards those that can adapt quickly to market changes, raise the pace of new product and service delivery, and maintain tight stakeholder relationships.


Virtual voices: Azure's neural text-to-speech service


Microsoft Research has been working on solving this problem for some time, and the resulting neural network-based speech synthesis technique is now available as part of the Azure Cognitive Services suite of Speech tools. Using its new Neural text-to-speech service, hosted in Azure Kubernetes Service for scalability, generated speech is streamed to end users. Instead of multiple steps, input text is first passed through a neural acoustic generator to determine intonation before being rendered using a neural voice model in a neural vocoder. The underlying voice model is generated via deep learning techniques using a large set of sampled speech as the training data. The original Microsoft Research paper on the subject goes into detail on the training methods used, initially using frame error minimization before refining the resulting model with sequence error minimization. Using the neural TTS engine is easy enough. As with all the Cognitive Services, you start with a subscription key and then use it to create a class that calls the text-to-speech APIs.
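For illustration, a minimal call via the Python Speech SDK might look like the sketch below; the key, region, and voice name are placeholders rather than values from the article, and the set of available neural voices has changed over time:

```python
# Minimal neural text-to-speech call via the Azure Speech SDK.
# YOUR_KEY / YOUR_REGION are placeholders; the voice name is one example.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"  # a neural voice

# By default the synthesizer plays the streamed audio on the default output device.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Neural TTS streams speech to the client.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis finished.")
```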



Quote for the day:


"A person must have the courage to act like everybody else, in order not to be like anybody." -- Jean-Paul Sartre


Daily Tech Digest - October 17, 2019

4 tips to help data scientists maximise the potential of AI and ML
With machine learning, business process scalability has made leaps and bounds, but it’s important not to get side-tracked by that, according to Edell. Instead, focus on the things that are going wrong, rather than attempting to improve the things that are already working. “The most common mistake really anyone can make when building an ML solution is to lose sight of the problem they are trying to solve,” he said. “As such, we can spend a lot of time making the tech better, but forgetting why we’re using the tech in the first place. “For example, we may spend a lot of time and money improving the accuracy of a face recognition engine from 92pc to 95pc, when we could have spent that time improving what happens when the face recognition is wrong – which might bring more value to the customer than an incremental accuracy improvement.” The potential that emerging technologies can have for overcoming challenges with data science, no matter the industry, is monumental. But for the sectors that are client and consumer-facing, the needs of customers should still come first.
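Edell's point about the failure path can be sketched in a few lines: instead of spending everything on accuracy, decide up front what happens when the model is unsure. The threshold and handler below are illustrative, not from the article:

```python
# Handle the failure path of a recognition model instead of only
# chasing accuracy. Threshold and handler names are illustrative.
def handle_recognition(match_name: str, confidence: float, threshold: float = 0.90) -> str:
    """Accept high-confidence matches; escalate the rest for review."""
    if confidence >= threshold:
        return f"auto-accept: {match_name}"
    # The "wrong answer" path: fall back to a human or a second factor
    # instead of silently acting on a shaky prediction.
    return f"escalate for manual review (confidence {confidence:.2f})"

print(handle_recognition("alice", 0.97))  # auto-accept: alice
print(handle_recognition("bob", 0.61))    # escalate for manual review (confidence 0.61)
```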



Velocity and Better Metrics: Q&A with Doc Norton
First of all, as velocity is typically story points per iteration and story points are abstract and estimated by the team, velocity is highly subject to drift. Drift is subtle changes that add up over time. You don’t usually notice them in the small, but compare over a wider time horizon and it is glaringly obvious. Take a team that knows they are supposed to increase their velocity over time. Sure enough, they do. And we can probably see that they are delivering more value. But how much more? How can we be sure? In many cases, if you take a set of stories from a couple of years ago and ask this team to re-estimate them, they’ll give you an overall number higher than the original estimates. My premise is that this is because our estimates often drift higher over time. The bias for larger estimates isn’t noticeable from iteration to iteration, but is noticeable over quarters or years. You can use reference stories to help reduce this drift, but I don’t know if you can eliminate it. Second of all, even if you could prove that estimates didn’t drift at all, you’re still only measuring one dimension - rate of delivery.
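The drift Norton describes is easy to quantify if you keep the data: re-estimate a handful of old stories and compare totals. A small sketch with invented numbers:

```python
# Quantify estimate drift: re-estimate old stories and compare totals.
# Story points below are invented for illustration.
original = {"story-101": 3, "story-102": 5, "story-103": 2, "story-104": 8}
reestimate = {"story-101": 5, "story-102": 5, "story-103": 3, "story-104": 13}

old_total = sum(original.values())
new_total = sum(reestimate[s] for s in original)
drift = new_total / old_total - 1.0

print(f"original total: {old_total}, re-estimated total: {new_total}")
print(f"drift: {drift:+.0%}")  # +44%: the same work, bigger numbers
```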



'Graboid' Cryptojacking Worm Spreads Through Containers

This is the first time the researchers have seen a cryptojacking worm spread through containers in the Docker Engine (Community Edition). While the worm isn't sophisticated in its tactics, techniques or procedures, it can be repurposed by the command-and-control server to run ransomware or other malware, the researchers warn. The Unit 42 research report did not note how much damage Graboid has caused so far or if the attackers targeted a particular sector. "If a more potent worm is ever created to take a similar infiltration approach, it could cause much greater damage, so it's imperative for organizations to safeguard their Docker hosts," the Unit 42 report notes. "Once the [command-and-control] gains a foothold, it can deploy a variety of malware," Jay Chen, senior cloud vulnerability and exploit researcher at Palo Alto Networks, tells Information Security Media Group. "In this specific case, it deployed this worm, but it could have potentially leveraged the same technique to deploy something more detrimental. It's not dependent on the worm's capabilities."




Data Literacy—Teach It Early, Teach It Often Data Gurus Tell Conference Goers

No one can understand everything, he said. That’s why the “real sweet spot” is the communication between the data scientists and the experts in various fields of inquiry to determine what they are seeking from the data and how it can be used. And there’s also an ethical component so that the data are not used to arrive at false conclusions. Sylvia Spengler, the National Science Foundation’s program director for Information and Intelligent Systems, said that solving today’s big questions requires an interdisciplinary approach across all the sciences. “We need a deep integration across a lot of disciplines,” she said. “This is made for data science and data analytics. But it puts a certain edge on actually being able to deal with the kinds of data coming at you because they are so incredibly different.” Spengler said this integration can only happen through teams of people working on it. “You have to be able to collaborate. Those soft skills are critical. It’s not just your brains but your empathy because it makes you capable of taking multiple perspectives,” she said.


Linux security hole: Much sudo about nothing


At first glance the problem looks like a bad one. With it, a user who is allowed to use sudo to run commands as any other user, except root, can still use it to run root commands. For this to happen, several things must be set up just wrong. First, the sudo configuration must give a user the right to use sudo but not the privilege of using it to run root commands. That can happen when you want a user to have the right to run specific commands that they wouldn't normally be able to use. Next, sudo must be configured to allow that user to run commands as an arbitrary user via the ALL keyword in a Runas specification. The latter has always been a stupid idea. As the sudo manual points out, "using ALL can be dangerous since in a command context, it allows the user to run any command on the system." In all my decades of working with Linux and Unix, I have never known anyone to set up sudo with ALL. That said, if you do have such an inherently broken system, it's then possible to run commands as root by specifying the user ID -1 or 4294967295. Thus, if the ALL keyword is listed first in the Runas specification, an otherwise restricted sudo user can then run root commands.
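Why do -1 and 4294967295 name the same user? UIDs are unsigned 32-bit values, and the setresuid() family treats -1 as "leave unchanged". A minimal Python sketch of the wraparound, with the vulnerable sudoers line and exploit invocation (as documented in the CVE-2019-14287 advisories) shown in comments:

```python
# CVE-2019-14287 hinges on unsigned 32-bit UID arithmetic.
# A vulnerable Runas spec might look like this in /etc/sudoers:
#     alice myhost = (ALL, !root) /usr/bin/vi
# which lets alice run vi as anyone except root. But:
#     sudo -u#-1 vi
# passes user ID -1, which sudo fails to map to a real user, and
# setresuid(-1, ...) means "leave the UID unchanged" -- sudo is already
# running as root, so the command runs as root despite the !root rule.
import ctypes

uid = -1
as_unsigned = ctypes.c_uint32(uid).value
print(as_unsigned)               # 4294967295: the same bit pattern as -1
print(as_unsigned == 2**32 - 1)  # True
```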



News from the front in the post-quantum crypto wars with Dr. Craig Costello
One good thing was that this notion of public key cryptography arrived, I suppose, just in time for the internet. In the seventies, the invention of public key cryptography came along, and that’s the celebrated Diffie-Hellman protocol that allows us to do key exchange. Public key cryptography is kind of a notion that sits above however you try to instantiate it. So public key cryptography is a way of doing things, and how you choose to do them, or what mathematical problem you might base it on, I guess, is how you instantiate public key cryptography. So initially, the two proposals that were put forward back in the seventies were based on what we call the discrete log problem in finite fields. When you compute powers of numbers, and you do them in a finite field, you might start with a number and compute some massive power of it and then give someone the residue, or the remainder, of that number and say, what was the power that I raised this number to in the group? And the other problem is factorization, so integer factorization.
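A toy version of the discrete log problem Costello describes, in Python; the modulus here is absurdly small so the brute-force step finishes instantly, whereas real deployments use moduli far too large for that:

```python
# Toy discrete-log illustration: computing a power mod p is fast;
# recovering the exponent from the residue is brute force at this size
# and infeasible at cryptographic sizes. p is tiny for illustration.
p, g = 2**13 - 1, 3          # small prime modulus and base

secret_exponent = 4711
residue = pow(g, secret_exponent, p)   # fast even for huge exponents
print(f"g^x mod p = {residue}")

# Brute-force search for some exponent with the same residue
# (any such exponent solves the discrete log problem).
x = next(x for x in range(p) if pow(g, x, p) == residue)
print(f"recovered exponent: {x}")
```

That asymmetry (easy forward, hard backward) is what Diffie-Hellman key exchange rests on.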



Beamforming explained: How it makes wireless communication faster
The mathematics behind beamforming is very complex (the Math Encounters blog has an introduction, if you want a taste), but the application of beamforming techniques is not new. Any form of energy that travels in waves, including sound, can benefit from beamforming techniques; they were first developed to improve sonar during World War II and are still important to audio engineering. But we're going to limit our discussion here to wireless networking and communications. By focusing a signal in a specific direction, beamforming allows you to deliver higher signal quality to your receiver — which in practice means faster information transfer and fewer errors — without needing to boost broadcast power. That's basically the holy grail of wireless networking and the goal of most techniques for improving wireless communication. As an added benefit, because you aren't broadcasting your signal in directions where it's not needed, beamforming can reduce interference experienced by people trying to pick up other signals. The limitations of beamforming mostly involve the computing resources it requires; there are many scenarios where the time and power resources required by beamforming calculations end up negating its advantages.
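To make the idea concrete, here is a small numerical sketch of steering a beam with a uniform linear array; the eight elements, half-wavelength spacing, and ideal isotropic antennas are simplifying assumptions for illustration, not a description of any real product:

```python
# Phased-array beamforming sketch: apply a per-element phase ramp so the
# emitted waves add in phase toward a chosen steering angle.
# Assumes an ideal uniform linear array with half-wavelength spacing.
import numpy as np

n_elements = 8
d_over_lambda = 0.5                  # element spacing in wavelengths
steer = np.deg2rad(30)               # steer the beam toward +30 degrees

n = np.arange(n_elements)
weights = np.exp(-2j * np.pi * d_over_lambda * n * np.sin(steer))

def gain_db(theta):
    """Normalized array response toward angle theta for the chosen weights."""
    steering_vector = np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))
    af = np.abs(weights @ steering_vector) / n_elements
    return 20 * np.log10(af)

for deg in (10, 20, 30, 50):
    print(f"{deg:>3} deg: {gain_db(np.deg2rad(deg)):6.1f} dB")
```

At the steering angle the elements add coherently (0 dB, the normalized maximum); off-axis angles come out substantially lower, which is the interference reduction described above.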


A First Look at Java Inline Classes
The goal of inline classes is to improve the affinity of Java programs to modern hardware. This is to be achieved by revisiting a very fundamental part of the Java platform — the model of Java's data values. From the very first versions of Java until the present day, Java has had only two types of values: primitive types and object references. This model is extremely simple and easy for developers to understand, but can have performance trade-offs. For example, dealing with arrays of objects involves unavoidable indirections, and this can result in processor cache misses. Many programmers who care about performance would like the ability to work with data that utilizes memory more efficiently. Better layout means fewer indirections, which means fewer cache misses and higher performance. Another major area of interest is the idea of removing the overhead of needing a full object header for each data composite — flattening the data. As it stands, each object in Java's heap has a metadata header as well as the actual field content. In HotSpot, this header is essentially two machine words — mark and klass.


Developer Skills for Successful Enterprise IoT Projects
The first step of any successful IoT project is to define the business goals and build a proof-of-concept system to estimate if those goals are reachable. At this stage, you need only a subset of the skills listed in this article. But once a project is so successful that it moves beyond the proof-of-concept level, the required breadth and depth of the team increases. Often, individual developers possess several of the skills. Sometimes, each skill on the list will require its own team. The number of people needed depends both on the complexity of the project and on success. More success usually means more work but also more revenue that can be used to hire more people. Most IoT projects include some form of custom hardware design. The complexity of the hardware varies considerably between projects. In some cases, it is possible to use hardware modules and reference designs, for which a basic electrical engineering education is enough. More complex projects need considerably more experience and expertise. To build Apple-level hardware, you need an Apple-level hardware team and an Apple-level budget.


IoT in Vehicles: The Trouble With Too Much Code
The threat and risk surface of internet of things devices deployed in automobiles is exponentially increasing, which poses risks for the coming wave of autonomous vehicles, says Campbell Murray of BlackBerry. To get a sense of how complicated today's cars are, Murray notes that while an A380 airliner runs around four million lines of code, an average mid-size car has 100 million lines of code. Statistically, that means there are likely many software defects. Using a metric of .015 bugs per line of code, a car with that much code could have as many as 1.5 million bugs, Murray says in an interview with Information Security Media Group. Reducing those code bases is one way to reduce the risks, he says. "It's absolutely astonishing - the number of vulnerabilities that could exist in there," Murray says. Meanwhile, enterprises deploying IoT need to remember the principles of safe computing: assigning the least amount of privileges, using dual-factor authentication and strong access controls, he adds.



Quote for the day:

"Leading people is like cooking. Don't stir too much; It annoys the ingredients_and spoils the food" -- Rick Julian