Daily Tech Digest - June 30, 2018

Block diagram showing system impact due to a failed microservice
Resiliency is the capability to handle partial failures while continuing to execute and not crash. In modern application architectures — whether microservices running in containers on-premises or applications running in the cloud — failures are going to occur. For example, applications that communicate over networks are subject to transient failures. These temporary faults cause brief periods of downtime, triggered by timeouts, overloaded resources, networking hiccups, and other problems that come and go and are hard to reproduce. ... You can’t avoid failures, but you can respond in ways that keep your system up or at least minimize downtime. For example, when one microservice fails, its effects can cascade and cause the whole system to fail. ... Developers often use the Circuit Breaker and Retry patterns together to give retrying a break. Retry tries an operation again, but when it doesn’t succeed, you don’t always want to just try it one more time or you may risk prolonging the problem. The Circuit Breaker pattern effectively shuts down all retries on an operation after a set number of retries have failed.
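The interplay of the two patterns is easier to see in code. Below is a minimal, illustrative sketch (not any particular library's API) of a retry loop wrapped around a simple in-process circuit breaker: transient failures are retried, but once the breaker trips, further attempts fail fast.

```python
import time

class CircuitOpenError(Exception):
    """Raised when the breaker refuses to attempt the operation."""

class CircuitBreaker:
    # Illustrative in-process breaker: after max_failures consecutive
    # failures the circuit opens, and calls fail fast until reset_after
    # seconds have passed (then one trial call is allowed: "half-open").
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit open; failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result

def retry(operation, breaker, attempts=3, delay=0.0):
    # Retry wraps the breaker: transient faults are retried, but once
    # the breaker opens, retrying stops immediately.
    for i in range(attempts):
        try:
            return breaker.call(operation)
        except CircuitOpenError:
            raise  # the breaker said stop; don't prolong the problem
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```

A flaky call that recovers within the retry budget succeeds transparently; an operation that keeps failing trips the breaker, and subsequent retries are cut short.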



The Humanoid Banker – Science Fiction or Future?
It is obvious that banking will also have to change and adapt to the new reality. Big events in the financial industry have had great impacts in the last 10 years. The global financial crisis from 2007-08 was just the ignition for numerous scandals followed by huge fines, leading to a flood of new regulations (AML, KYC, Basel III). Unfortunately, most regulations have yet to prove their benefits for the consumer and bank client. They have definitely made service delivery more bureaucratic, time-consuming and costly, reducing client satisfaction. And, of course, they have also made life for the bank staff more difficult, especially as resources are trimmed and cost-cutting continues. And this is all happening in the midst of a global shift of economic power towards the East. We should also note that the developed world, especially in Europe, is rather more technophobic than progressive. This sharply contrasts with the emerging Asian economies, driven by behemoth China. They are already now taking the lead in many technology and science disciplines, e.g., robotics, artificial intelligence, social media, smartphones, wearable technologies, internet of things and so on.



Machine learning evolution (infographic)

Deep learning’s improved accuracy in image, voice, and other pattern recognition has made Bing Translator and Google Translate go-to services. And enhancements in image recognition have made Facebook Picture Search and the AI in Google Photos possible. Collectively, these have put machine recognition capabilities in the hands of consumers in a big way. What will it take to make similar inroads in business? Quality training data, digital data processing, and data science expertise. It will also require a lot of human intelligence, such as language-savvy domain experts who refine computable, logically consistent business context to allow logical reasoning. Business leaders will have to take the time to teach machines and incorporate machine intelligence into more processes, starting with narrow domains. Some in the statistically oriented machine learning research “tribes”—the Connectionists, the Bayesians and the Analogizers, for example—will worry that “human-in-the-loop” methods advocated by the Symbolists aren’t scalable. However, we expect these human-to-machine feedback loops, which blend methods from several tribes, will become a lot more common inside the enterprise over the next few years.


Reinventing the healthcare sector with Artificial Intelligence

India is joining a growing list of countries using AI in healthcare. The adoption of AI in India is being propelled by the likes of Microsoft and a slew of health-tech startups. For instance, Manipal Hospitals, headquartered in Bengaluru, is using IBM Watson for Oncology, a cognitive-computing platform, to help physicians discover personalised cancer care options, according to an Accenture report. For cardiac care, Columbia Asia Hospitals in Bengaluru is leveraging startup Cardiotrack’s AI solutions to predict and diagnose cardiac diseases. “Last year the company embarked on Healthcare NExT, a Microsoft initiative which aims to accelerate healthcare innovation through AI and cloud computing. By working side-by-side with the healthcare industry’s most pioneering players, we are bringing Microsoft’s capabilities in research and product development to help healthcare providers, biotech companies and organizations across India use AI and the cloud to innovate,” said Anil Bhansali, Corporate Vice President, Cloud & Enterprise, Managing Director, Microsoft India (R&D) Private Limited.


Look For the ‘Human’ When Buying HR Technology


Cognitive data processing technologies have the ability not only to automate common tasks like generating and distributing reports, but also to perform duties as highly nuanced as career coaching across an entire employee base. Machine learning algorithms can learn an organization’s priorities for skills development, help assess an individual’s credentials, then make recommendations for training or positions to consider. Offered to employees through an existing learning management system or a mobile app, such innovations can facilitate large-scale corporate objectives of development and retention and give each and every worker the useful career information they crave today. Yet some new technologies have a wow factor that makes them seem useful or innovative when they may not be a ready or reasonable replacement for HR’s role. Virtual Reality (VR) is being touted for all sorts of uses, from eLearning to giving candidates a taste of what it’s like to work for a company. But these should be considered as augmenting traditional tactics, not entirely replacing them.
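As a toy illustration of the kind of recommendation logic described above (real systems learn these mappings from data rather than hard-coding them, and the names here are hypothetical), a skills-gap recommender might look like this:

```python
def recommend_training(employee_skills, role_requirements, catalog):
    """Suggest courses covering the gap between an employee's skills and
    a target role's requirements. A hand-coded stand-in for the ML-driven
    matching described above, purely for illustration."""
    gap = set(role_requirements) - set(employee_skills)
    return sorted(course
                  for skill in gap
                  for course in catalog.get(skill, []))

# Hypothetical course catalog keyed by skill
catalog = {"sql": ["Intro to SQL"], "python": ["Python 101", "Pandas Basics"]}
```

Calling `recommend_training({"excel", "sql"}, {"sql", "python"}, catalog)` would surface the Python courses, since that is the only missing requirement.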


Some reasons to think about big data as data commons

Finally, the digital economy has generated a considerable fiscal distortion: companies that rely on digital business models pay on average half the effective tax rate of traditional companies, thanks to the “fluid” nature of their businesses and to the placement of their subsidiaries in countries with low tax regimes. In response, the European Commission has recently put forward two distinct legislative proposals to ensure a fair and efficient European tax system. In the light of the above, an alternative data management approach for a more equal and sustainable socio-economic digital environment is not only desirable but necessary. The DECODE project is building in this direction: a digital economy where citizen data is not stored in silos located and handled in overseas countries but rather self-controlled and available for broader communal use, with appropriate privacy protections and value distribution outcomes. The Data Commons approach centres precisely around the pooling and collective management of individually-produced streams of personal and sensor data that, combined with the public data of cities, will offer data-driven services that better respond to individual and overall community needs.


Third Wave Dev + Ops: Self-Learning DevOps for a Self-Learning World

DevOps AI success, in other words, may be largely contingent upon continuous testing. After all, we want our AI-powered recommendation engines to guide customers toward the right items while they're still on our websites. Why wouldn't we similarly want our AI-powered DevOps to guide developers toward the right behaviors while they're still on a feature or fix? ... Most IT organizations still treat cybersecurity as its own functional silo. In five years, this approach won't work—especially since next-generation architectures, such as blockchain, that depend on innate code might run on imperfectly secured environments beyond enterprise IT teams' control. Security must therefore become intrinsic to DevOps QA—not something for someone else to clean up after the fact. Building security checks directly into the integrated development environment (IDE) helps to protect companies from increasingly sophisticated attacks designed to discover and exploit design flaws. Security-enabled DevOps also saves money and speeds time to market in precisely the same way that shift-left QA does.
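To give a flavor of what a shift-left security check can look like, here is a small, illustrative scanner for hardcoded credentials, the sort of rule an IDE plugin or pre-commit hook might run. The patterns are deliberately simplistic; real tools ship far larger rule sets.

```python
import re

# Illustrative patterns only; production scanners use much richer rules.
SECRET_PATTERNS = [
    # assignments like: password = "hunter2", api_key = 'abc123'
    re.compile(r'(?i)(password|passwd|secret|api[_-]?key)\s*=\s*["\'][^"\']+["\']'),
    # the general shape of an AWS access key ID
    re.compile(r'AKIA[0-9A-Z]{16}'),
]

def scan_source(text):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings
```

Wired into a commit hook or build step, a check like this rejects the change before the flaw ever reaches a deployed environment, which is the whole point of shifting security left.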


Every Android device from the last 6 years may be at risk to RAMPage vulnerability

While not impossible, RAMPage is difficult to exploit in practice on end-user devices, partly because vendor-specific or device-specific issues make it harder to reliably create the conditions that allow for exploitation. Because of the degree of precision involved, it would theoretically be possible that the same model phone with DRAM from different vendors would have different avenues of attack, or that certain optional hardware protections of LPDDR4, if added at manufacturing time, would partially mitigate the attack, the paper noted. Additionally, while the RAMPage attack was only demonstrated on an LG G4, it is possible that it may be applicable to iOS devices and other devices using LPDDR2, 3, or 4 chips and running software with similar memory management techniques. That said, the researchers have proposed a fix for RAMPage called GuardION. From their tests in the whitepaper, they found that GuardION "results in a performance degradation of 6.6%, which we believe is still acceptable."


The Insane Amounts of Data We're Using Every Minute

By the looks of the research, things are only getting bigger. In 2012, there were approximately 2.2 billion active internet users. In 2017, active internet users reached 3.8 billion people -- nearly 48 percent of the world’s population. When it comes to social media, data usage is unsurprisingly high. Since last year, Snapchat alone saw a 294 percent increase in the number of images shared per minute. Nearly 2.1 million snaps are shared every 60 seconds. On average, there are 473,400 tweets posted every minute, 49,380 Instagram photos and 79,740 Tumblr posts. So who’s behind all this social media madness? Americans upped their internet usage by 18 percent since 2017; however, it’s not all going to Snapchat and Twitter. Much of it is going to video-streaming services such as Netflix and YouTube. Since last year, Netflix saw a whopping 40 percent increase in streaming hours, going from 69,444 hours to 97,222. And YouTube videos have reached 4.3 million views per minute. Even the peer-to-peer transactions app Venmo saw a major data jump, with 32 percent more transactions processed every minute compared to last year. Overall, Americans use 3.1 million GB of data every minute.
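Those growth figures are easy to sanity-check. For instance, Netflix's reported jump from 69,444 to 97,222 streaming hours per minute does work out to the stated 40 percent:

```python
def percent_increase(old, new):
    """Percentage growth from old to new."""
    return (new - old) / old * 100

# Netflix streaming hours per minute, per the figures above
growth = percent_increase(69_444, 97_222)
```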


Building a Robust and Extensive Security Architecture


Building a device security system is the first line of defense in ensuring IoT security. The security capabilities of devices need to be configured to match their functions and computing resources, including memory, storage and CPU. For weak devices, such as water and gas meters, where resources are limited and power consumption and cost are issues, basic security capabilities are a must. These include basic two-way authentication, DTLS+, encrypted transmission and remote upgradability, as well as lightweight and secure transmission protocols. Strong devices with more powerful computing capabilities that don’t have power consumption constraints and are operationally critical, such as industrial control terminals and car networking equipment, require advanced security capabilities, including trusted devices, intrusion detection, secure startup, and anti-virus protection. Device chip security and security for lightweight operating systems such as LiteOS need defense capabilities in line with the security protections of strong devices.



Quote for the day:


"Remember this: Anticipation is the ultimate power. Losers react; leaders anticipate." -- Tony Robbins


Daily Tech Digest - June 29, 2018

What is Julia? A fresh approach to numerical computing
Julia is a free, open source, high-level, high-performance, dynamic programming language for numerical computing. It has the development convenience of a dynamic language with the performance of a compiled, statically typed language, thanks in part to a JIT compiler based on LLVM that generates native machine code, and in part to a design that implements type stability through specialization via multiple dispatch, which makes it easy to compile to efficient code. ... What we’re seeing here is that Julia code can be faster than C for a few kinds of operations, and no more than a few times slower than C for others. Compare that to, say, R, which can be almost 1,000 times slower than C for some operations. Note that one of the slowest tests for Julia is Fibonacci recursion; that is because Julia currently lacks tail recursion optimization. ... Julia fans claim, variously, that it has the ease of use of Python, R, or even Matlab. These comparisons do bear scrutiny, as the Julia language is elegant, powerful, and oriented towards scientific computing, and the libraries supply a broad range of advanced programming functionality.
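The Fibonacci point generalizes beyond Julia: the naive doubly recursive definition does an exponential amount of work, which is why benchmarks use it to stress function-call overhead. A quick Python illustration (counting calls rather than timing) shows how large that call tree gets, and how memoization collapses it:

```python
from functools import lru_cache

def fib_naive(n, counter):
    # Doubly recursive: the call tree grows exponentially in n.
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoization caches each fib(k) once, making the work linear in n.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Computing fib(20) naively takes over twenty thousand calls; the memoized version needs only one evaluation per distinct argument.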



Using machine learning to understand the human immune system

The human immune system operates at a scale large enough to make your head spin. There are two billion lymphocytes in the body, among them what's known as 'helper' T cells, others as 'cytotoxic' or 'killer' T cells. Each T cell can recognise the antigens -- the triggers that will set off the immune system -- that are the signatures of bacteria, viruses, fungi or other invaders that have entered the body. Each T cell can bind to hundreds of different antigens, each potentially unique to a different bacterium or virus. Once a T cell has got a hit, depending on what type of T cell it is, it may kill the invader, or signal the millions of other immune cells to come and take on the wrongdoer too. Anyone taking a snapshot of the immune system when the T cells are activated, by noting which T cell receptors are activated and which antigens they bind to, could work out which disease has taken over the body. And, once the disease is known, doctors can see more clearly how it can be treated.



Why PGP is fundamentally flawed and needs to be fixed


From my vantage point, the biggest problem with encryption (outside of these vulnerabilities) is the fact that few people actually use it. Sure those that are really, really concerned about privacy and security will make use of PGP in some form or function, but the average user (of which there are hundreds of millions) wouldn't know PGP if it reached out from their monitor and slapped them across the face to say, "Pay attention to me!" There's a reason for this. The average user doesn't know where to begin working with encryption ... on any level. Try talking your parents through the usage of encryption in email. Watch their faces go slack as every word you utter flies over their heads. The thing is, if PGP (or OpenPGP, or GPG, or GnuPG ... you get the idea) is to succeed, it needs to be used (and not by the enlightened few). Encryption needs to become the standard. For that to happen, it needs to be built into email clients such that users require zero hand-holding to understand or use the technology. But before that can happen, the implementations of PGP need to be fixed.


Storage class memory set to upset hardware architectures


The more expensive, lower-capacity devices can be used as a host or array cache, adding the benefit of persistence compared with simply using DRAM. The extended endurance of these products compared with NAND flash also makes them more suited for write caching or as an active data tier. Hyper-converged infrastructure (HCI) solutions can take advantage of low latency persistent memory deployed into each host. Placing the persistent storage on the PCIe or even the memory bus will significantly reduce I/O latency. But this also risks exposing inefficiencies in the storage stack, so suppliers will want to be quick to identify and fix any issues. Disaggregated HCI solutions, such as those from Datrium and NetApp, should see large performance improvements. In both cases, the architecture is built on shared storage with local cache in each host. Performance is already high with NAND flash, but persistent caching using products such as Optane offers more resiliency (and less cache warm-up).


Data readiness is much more than mere cleanliness

Data set readiness comprises traditional data preparation ideas: data cleanliness and consistency, de-duplication and the management of unstructured data. (The seemingly simple task of mailing address normalization is a data preparation discipline in its own right.) In the world of the V’s – variety, volume, velocity, veracity, and even validity and volatility – the biggest challenge here is variety. Since data sets evolve over time as domain experts look for new insights and correlation with new data sources, some agility in the ability to acquire and integrate new data sets is a part of data set readiness, albeit in the “meta” sort of way where being ready to get more data ready is a prerequisite. Data pipeline readiness addresses some of the larger big data V’s: volume and velocity. Once you have models to execute, operationalizing them to operate reliably at scale and at business speed brings an entirely new set of challenges. Can your business handle the massive data flows? Can it handle them in an increasingly expeditious way?


Companies are struggling with security automation

“The cybercrime landscape is incredibly vast, organized and automated – cybercriminals have deep pockets and no rules, so they set the bar,” said Amy James, Director of Security Portfolio Marketing at Juniper Networks. “Organizations need to level the playing field. You simply cannot have manual security solutions and expect to successfully battle cybercriminals, much less get ahead of their next moves. Automation is crucial.” The growing threat landscape and the security skills gap facing cybersecurity teams demand that organizations implement automation for a stronger security posture. Respondents recognize this growing importance and how automation can improve productivity, address the growing volume of threats and reduce the rate of false positives. The top two benefits of security automation, according to respondents, are: increased productivity of security personnel (64 percent) and automated correlation of threat behavior to address the volume of threats (60 percent). Just over half (54 percent) of respondents say these automation technologies simplify the process of detecting and responding to cyber threats and vulnerabilities.


Will blockchain bring data ownership back to users?

Blockchain, a technology that has seen success in cryptocurrency and beyond through its security, efficiency and non-centralized control, has been seen as a way of democratizing data and putting ownership back into the hands of users. As compared to the current practices where ownership of user data is held by the enterprise, blockchain would enable the creation of a self-sovereign identity, where individuals control their own identities and personal data and are able to decide who to share it with, and to what extent. In addition, blockchain offers the possibility of micro-incentivizing people to share data at their own will, which can significantly disrupt current ways of working for industries such as advertising and content.  Organizations will need to come to terms with this new reality and be aligned with the changing mindsets and desires of their users when it comes to management of personal data. While a self-sovereign identity that is enabled by blockchain could revolutionize how personal data is managed, it does not come about without hurdles. For starters, the burden of managing and allocating access would have to be borne by the individual.


IT Mission Vision & Values Statements: Foundations For Success

The difference between a vision statement and a mission statement can be confusing. Some enterprise vision statements are actually missions and vice versa. A good vision paints a picture of a desired future state. It appeals to the heart, inspiring employees, customers, and other stakeholders to do their best. A good vision rarely changes, remaining constant through different leaders, economic circumstances, and challenges. A mission describes how the enterprise will get to the desired future state. It appeals to the head and is an anchor against which departments and programs can be measured to determine how well they support the enterprise. Missions evolve to reflect new challenges as intermediate goals are attained. ... Value statements describe the principles, beliefs and operating philosophy that shape culture. Strong values serve as a moral compass, guiding interactions among employees and providing a standard against which behaviors can be assessed. Passion, teamwork, integrity, diversity and quality are found in many enterprise value statements.


Slack outage causes disruption, but highlights importance of team chat

“For individuals and organizations using team collaboration tools such as Slack, real-time communications have become ubiquitous and second nature to their work,” said Raúl Castañón-Martínez, a senior analyst at 451 Research. High expectations for service availability, he added, mean that every downtime incident will be perceived as a “serious disruption.” “On the positive side, this signals that Slack has been successful in permeating the enterprise, and team collaboration tools are rapidly becoming a core productivity tool, alongside email and calendar,” he said. Slack did not provide Computerworld with further details of the cause of the outage, or the number of users affected. A spokesperson said: “On June 27th between 6:33am and 9:49am PT Slack experienced an outage where people could not connect to their workspace. We worked as quickly as possible to bring Slack back to normal for everyone, and we are continuing to investigate the cause of this issue. We’re deeply sorry for the disruption. Please see the Slack Status page for the latest information.”


Online banks struggle to stay connected with younger mobile users

J.D. Power’s findings imply that those digital-only institutions aren’t just competing with other banks. Consumer use of other apps, like Uber or Seamless, is influencing customers’ expectations in the banking sphere as well. Direct banks earned an overall satisfaction score of 863 out of 1,000 points in J.D. Power’s latest ranking, compared with traditional retail banks’ overall score of 806. Direct banks also scored higher than their traditional counterparts for service across all other banking channels, including websites and call centers. On mobile banking, however, direct banks held the narrowest lead over traditional retail banks, earning a score of 864 compared with traditional banks’ score of 850. Last year, both direct and traditional banks scored 872 on satisfaction with mobile channels. J.D. Power also found that direct bank customers’ awareness and usage of various mobile banking features had declined year over year across every single feature the firm tracks, including bill pay and person-to-person payments.



Quote for the day:


"Learn to appreciate what you have, before time makes you appreciate what you had." -- Anonymous


Daily Tech Digest - June 28, 2018

A closer look at Google Duplex


Duplex was — and still is — very much a work in progress. Among other things, the system didn’t provide a disclosure in the early days, a fact that could potentially violate the “two-party consent” required to record phone calls and conversations in states like Connecticut, Florida, Illinois, Maryland, Massachusetts, Montana, New Hampshire, Pennsylvania, Washington and Google’s own home base of California. “The consent-to-record issues here go beyond just Duplex to the broader legal implications of machine speech,” said Gabe Rottman, director of the Technology and Press Freedom Project at the Reporters Committee for Freedom of the Press. “If the service extends to all-party consent states or globally, you could see questions pop up like whether consent is valid if you don’t know the caller is a machine. Curveballs like that are just going to multiply the more we get into the uncanny valley where automated speech can pass as human.” Going forward, the system will be confined to those states where the laws make it feasible. That also applies to interstate calls, so long as both sides are covered.



10 Hottest Job Skills for Young IT Workers

Daniel Culbertson, an economist at job posting site Indeed.com, says those younger workers are more attracted to technology jobs than older workers are. In addition, when workers under 40 go looking for a job, they tend to click on very different postings than their older counterparts do. For organizations that are looking to expand their head count in the tight labor market, attracting these younger workers can be critical for remaining competitive. That means they need to craft job postings that will appeal to the Millennials. The skills that attract attention from young job candidates can also serve as a sort of compass for where the technology industry is heading. Because technology changes so quickly, tech workers tend to look for jobs related to areas that they believe will become more important in the future. Their interests can highlight trends that are likely to remain relevant for some time. Culbertson ran an analysis of job seeker behavior on Indeed.com and came up with a list of terms that appeared most often in the job postings clicked by people under 40.


Will artificial intelligence bring a new renaissance?


Artificial intelligence and robotics were initially thought to be a danger mainly to blue-collar jobs, but that is changing: white-collar workers – such as lawyers and doctors – who carry out purely quantitative analytical processes are also becoming an endangered species. Some of their methods and procedures are increasingly being replicated and replaced by software. For instance, researchers at MIT's Computer Science and Artificial Intelligence Laboratory, Massachusetts General Hospital and Harvard Medical School developed a machine learning model to better detect cancer. They trained the model on 600 existing high-risk lesions, incorporating parameters like family history, demographics, and past biopsies. It was then tested on 335 lesions and they found it could predict the status of a lesion with 97 per cent accuracy, ultimately enabling the researchers to upgrade those lesions to cancer. Traditional mammograms uncover suspicious lesions, which are then tested with a needle biopsy. Abnormalities would then undergo surgery, with around 90 per cent ultimately proving benign, rendering the procedures unnecessary.


How to secure cloud buckets for safer storage


Amazon Macie automatically discovers and classifies data stored inside Amazon S3 buckets using machine learning technology for natural language processing, and this might very well be the future. It is clear that human error cannot be reduced to zero, so putting near-real-time automated controls in to contain the risks once such an error inevitably occurs is a good approach. Another option is to enable Amazon's Default Encryption feature, which will automatically encrypt any file placed inside a bucket. Some other available features include Amazon's permission checks and alarms and the use of access control lists. It is also critical to monitor public access and API calls. Alerts should be set and actioned to cover the dumping of large amounts of files or large files in general. A SIEM can assist in correlating the required security event data for these alerts via rules and set thresholds. Data breaches through cloud storage are a problem that will not go away. There are many reasons why this topic is still such an issue, but there are mitigation options and there have been some promising developments in this space.
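For the Default Encryption feature mentioned above, the bucket-level setting is a small configuration payload applied through the S3 API. A sketch in Python follows; the helper builds the payload, and the commented-out boto3 call (with a hypothetical bucket name) shows how it would be applied:

```python
def default_encryption_config(algorithm="AES256"):
    """Build the server-side encryption rule that S3's
    put_bucket_encryption API expects. SSE-S3 ("AES256") is shown;
    "aws:kms" selects SSE-KMS instead."""
    return {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": algorithm}}
        ]
    }

# With boto3 (not run here), applying it would look like:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_encryption(
#       Bucket="example-bucket",  # hypothetical bucket name
#       ServerSideEncryptionConfiguration=default_encryption_config(),
#   )
```

Once set, any object uploaded without its own encryption header is encrypted automatically, which removes one common human-error path.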


Robots Are Our Friends -- How Artificial Intelligence Is Leveling-Up Marketing

Not only is AI automating jobs we don’t want to do, it’s also opening the doors to jobs we can’t do. Since AI has the ability to process an infinitely larger dataset than a human can, it can leverage that scale to identify marketing insights that would otherwise be lost. Say you want to take the next step in that content-marketing data-collection project: You not only want to catalogue all of the “video marketing” content, but to catalogue all of the content being published in your industry more broadly. Ultimately, you'll want to use this catalogue to drive market-informed content campaigns of your own. Identifying newly emerging topics or types of articles that garner above-average shares can help direct new content creation to align with existing trends. A given article could have many different qualities that could lead to its success. It’s AI’s ability to tag and compare many data points that ultimately produces the marketing takeaway. AI’s strength in turning a mass of data into insight truly shines in the noisiest, highest-volume channels that a marketer hopes to master.


Reduce breach risk and costs with security resilience

Best effort is a familiar scenario for most IT shops. Either the security engineer, executive or another leader has said, “We need to install some level of security.” This typically involves implementing firewalls, basic security components, and maybe some basic auditing and monitoring. The next rung up the ladder is regulatory compliance. This is often an executive-level initiative. The thought is that business needs compel the company to be compliant to PCI, HIPAA, or some other standard. One might think this would make the security architecture more robust. Unfortunately, while compliance may be necessary for auditing purposes, it does not guarantee security. The third level is essentially the defensive approach — “I’m going to make this network so secure that no one is going to break into it.” This is when all those inline and out-of-band devices are deployed. You can even create defense-in-depth strategies for prevention. For instance, if someone gets through Port 80 on the firewall, the next step is to challenge the data with DPI (deep packet inspection). There are other things you can do as well, like implement prevention, detection, and response processes.
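The defense-in-depth idea described above can be sketched as a chain of checks, where traffic must pass every layer to be admitted. The rules below are toy placeholders for a firewall rule and a DPI inspection, purely for illustration:

```python
def firewall_allows(packet):
    # Toy firewall rule: only traffic to port 80 gets through this layer.
    return packet.get("port") == 80

def dpi_allows(packet):
    # Toy deep-packet-inspection rule: reject payloads carrying a
    # known-bad marker, even though the firewall already let them in.
    return b"<script>" not in packet.get("payload", b"")

LAYERS = [firewall_allows, dpi_allows]

def admit(packet):
    """Defense in depth: a packet must pass every layer to be admitted."""
    return all(layer(packet) for layer in LAYERS)
```

The point of the layering is visible in the third case below: traffic that satisfies the firewall can still be challenged and dropped by the inspection layer behind it.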


Crossing the Big Data / Data Science Analytics Chasm


Becoming more effective at leveraging data and analytics is forcing organizations to move beyond the world of Business Intelligence (BI) to embrace the world of predictive and prescriptive analytics. Business Intelligence is about descriptive analytics: retrospective analysis that provides a rearview-mirror view on the business—reporting on what happened and what is currently happening. Predictive analytics is forward-looking analysis: providing future-looking insights on the business—predicting what is likely to happen and what one should do ... Unfortunately, at many companies with whom I talk and teach, there is an “analytics chasm” that is hindering the transition from descriptive questions to predictive analytics and prescriptive actions. This chasm is preventing organizations from fully exploiting the potential of data and analytics to power the organization’s business and operational models ... Forever in search of the technology “silver bullet” (the newest technology that magically solves the Analytics Chasm challenge), IT organizations continue to buy new technologies without a good understanding of what it takes to cross the Analytics Chasm.


How to get blockchains to talk to each other

A startup called Aion is developing what it calls a “token bridge” that will let holders of Ethereum-based tokens back up their assets on another blockchain—initially, one built and run by Aion—without duplicating the actual monetary supply, says Matthew Spoke, the company’s founder. The process relies on a group of computers, also called nodes, that have the ability to recognize valid transactions and write new ones to each chain, Spoke says. The nodes that form the bridge will also have a process for reaching agreement amongst themselves and deciding whether to respond to a certain transaction on one of the chains by executing a corresponding one on the other. Spoke says a big difference between the pre-internet days and the blockchain world is the money: today’s competing protocols are often backed by billions of dollars of investment. That will probably ensure that many will succeed, meaning the future will be ruled by numerous blockchains, he says, and interoperability will be key to mainstream adoption. Whatever we end up with, it probably won’t look like the internet—but it could be just as transformative.


Get ready for upcoming 6G wireless, too

“High frequencies, in the range of 100GHz to 1THz (terahertz),” will be used for 100Gbps 6G, ComSenTer scientists from the University of California, Santa Barbara, say in a release. The group created the ComSenTer center, which is part of the Semiconductor Research Corporation (SRC) at their school. For spectrum comparison, Verizon’s initial 5G millimeter-wave trials (along with Qualcomm and Novatel Wireless) that are taking place now will only go as far up the spectrum as 39GHz. “Our center is simply the next-, next-generation of communication and sensing,” says Ali Niknejad, ComSenTer associate director and a UC Berkeley professor, on SRC’s website. It’s “something that may become ‘6G.’” “Extreme densification of communications systems, enabling hundreds and even thousands of simultaneous wireless connections” will be part of it, the researchers claim, “with 10 to 1,000 times higher capacity than the nearer-term 5G systems and network.” Medical imaging, augmented reality and sensing for the Internet of Things (IoT) are some of the applications the scientists say will be enhanced by faster-than-5G radios.


The Big Data Question: To Share or Not To Share

"People are realizing that the data they have has some value, either for internal purposes or selling to a data partner, and that is leading to more awareness of how they can share data anonymously," Mike Flannagan of SAP told InformationWeek in an interview earlier this year. He said that different companies are at different levels of maturity in terms of how they think about their data. Even if you share data that has been anonymized in order to train an algorithm, the question remains whether you are giving away your competitive edge when you share your anonymized data assets. Organizations need to be careful. "Data is extremely valuable," said Ali Ghodsi, co-founder and CEO of Databricks (the big data platform that began as a hosted Spark offering) and an adjunct professor at the University of California, Berkeley. In Ghodsi's experience, organizations don't want to share their data, but they are willing to sell access to it. For instance, organizations might sell limited access to particular data sets for a finite period of time. Data aggregators are companies that will create data sets to sell by scraping the web, Ghodsi said.



Quote for the day:


"A leader must have the courage to act against an expert's advice." -- James Callaghan


Daily Tech Digest - June 27, 2018

The Future of Decentralization


Just as alternative, renewable, natural and sustainable energy can contribute to a cleaner environment, so too must the infrastructure of politics, economics and the motives of the web be remade with more intention to truly allow technology to serve humanity and not the other way around. This will take time, likely decades if not longer. It will take the help of whatever artificial intelligence becomes. ... Decentralization requires a human-AI hybrid civilization. It requires automation, robots, new kinds of jobs, roles and values not yet invented or even implemented or imagined. Radical decentralization also requires leadership, humans with a different consciousness, that’s more inclusive, more augmented with data, more wise — than any human leaders in history. The future of decentralization is global; it’s not something Scandinavia or China arrives at before others, though certainly they may embody some elements of it earlier. A decentralized internet, like solutions on the blockchain, will require upgrades, better iterations, improved software, smarter smart contracts, quantum infrastructures, and AIs that police them to make sure they are what they were intended to be.


IT chiefs keep obsolete systems running just to keep data accessible

One of the chief problems of keeping the aging systems running is related to security, as the research highlights. 87 per cent of the IT decision makers in the survey sample agree that legacy applications on older operating systems are more vulnerable to security threats. At the same time, 82 per cent recognize that old or legacy systems are rarely compatible with modern security and authentication methods. “On older systems some security vulnerabilities are harder – or even impossible – to resolve. If available at all, patches for new threats could be delayed because legacy apps are considered less of a priority,” says Jim Allum. “As legacy applications pre-date the latest security innovations there is a clear security risk to having a lot of legacy within your application portfolio.” A related issue is compliance, with 84 per cent of the sample agreeing that on old/legacy applications it is harder to accurately track and control access to sensitive data in line with stricter data privacy regulations such as the GDPR.



Global IoT security standard remains elusive


Despite the cacophony of approaches towards IoT security, Kolkman noted that most are underpinned by common IT security principles. “If you look at the different IoT security frameworks, there seems to be consensus on things like upgradability and data stewardship – even if there’s no global standard that describes it all,” he said. These principles are reflected in a set of enterprise IoT security recommendations released by the Internet Society this week. Among them is the need for companies to closely follow the lifecycle of IoT devices, which should be decommissioned once they are no longer updatable or secure. Meanwhile, the Internet Society’s Internet Engineering Task Force is also working on IoT standards in areas including authentication and authorisation, cryptography for IoT use cases and device lifecycle management. With cyber security at the top of most national security agendas today, Kolkman said the Internet Society has reached out to policy makers to provide recommendations about what they can do, such as setting minimum standards of IoT security and accountability.


A CIO on Carrying the Burden of Medical Device Cybersecurity

The situation "has created significant challenges, because ... those devices sit in our networks and infrastructures from the technology side, and we're now held responsible to remediate those issues," says Earle, who is also chair of the College of Healthcare Information Management Executives - or CHIME - board of trustees. "Many of those devices are very proprietary and it's very difficult to manage them because you would need to put in some kind of solution that ... monitors devices - and the proprietary nature of those devices makes that very challenging to do," he says in an interview with Information Security Media Group. "It's a lack of standards as well as a lack of characterization of those standards that makes this challenging. There's no true vulnerability disclosure associated with these devices. Suppliers should provide documentation of the vulnerabilities of their products like they would normally do for anything else in a situation like that. We need to ask for greater risk sharing."


Know your enemy: Understanding insider attacks


When an enterprise establishes an insider threat program, executives need to be aware of the potential negative effects this can have on employee morale and sensitivity to loss of privacy. Implementing an insider threat program mandates increased communication with the staff to explain the program, explain how they can help and offer frequent emphasis on program wins. The 2016 Ponemon Institute report "Tone at the Top and Third Party Risk," noted that "If management is committed to a culture and environment that embraces honesty, integrity and ethics, employees are more likely to uphold those same values. As a result, such risks as insider negligence and third party risk are minimized." An insider threat program should also include a steering board/committee. Ideally, such a committee should include representatives from law, intellectual property, the office of internal governance, global privacy, human resources, information technology, corporate communications and security.


The future of consumer MDM: Cloud, referential matching and automation

Current MDM technologies typically use “probabilistic” and “deterministic” matching algorithms to match and link consumer records across an enterprise and to ensure there is only one master record for each consumer. These algorithms match records by comparing the demographic data contained in those records—data such as names, addresses, and birthdates. But demographic data is notoriously error-prone, frequently incomplete and constantly falling out of date. And probabilistic and deterministic matching algorithms are only as accurate as the underlying demographic data they are comparing, meaning they are fundamentally limited in how accurate they can be by the availability and quality of the data. But there is a new paradigm in identity matching technology called “referential matching” that is not subject to these same fundamental limits. Rather than directly comparing the demographic data of two consumer records to see if they match, referential matching technologies instead compare the demographic data from those records to a comprehensive and continuously-updated reference database of identities.
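The difference between the two approaches can be sketched in a few lines of Python. Everything here is invented for illustration — the records, field names, scoring rule, and reference database are stand-ins, not a real MDM API:

```python
# Hypothetical sketch: direct demographic matching vs. referential matching.

def direct_match(rec_a, rec_b):
    """Probabilistic-style match: compare the two records' own demographics.
    Accuracy is capped by the quality of the data in the records themselves."""
    score = sum(rec_a[f] == rec_b[f] for f in ("name", "address", "dob"))
    return score >= 2

def referential_match(rec_a, rec_b, reference_db):
    """Referential match: resolve each record against a reference identity
    (which may carry historical values), then check whether both records
    resolve to the same identity."""
    def resolve(rec):
        for identity_id, identity in reference_db.items():
            if rec["dob"] == identity["dob"] and rec["address"] in identity["addresses"]:
                return identity_id
        return None
    a, b = resolve(rec_a), resolve(rec_b)
    return a is not None and a == b

# The reference database knows this person's current AND former addresses.
reference_db = {
    "id-1": {"dob": "1980-05-01", "addresses": ["12 Oak St", "9 Elm Ave"]},
}
old = {"name": "J. Smith",   "address": "12 Oak St", "dob": "1980-05-01"}
new = {"name": "Jane Smith", "address": "9 Elm Ave", "dob": "1980-05-01"}

print(direct_match(old, new))                     # False: the records disagree
print(referential_match(old, new, reference_db))  # True: same reference identity
```

The point of the sketch is the failure mode: because the person moved and changed how her name is recorded, direct comparison of the two records fails, while matching each record against the richer reference identity still links them.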


The ML.Net project version 0.2 is available for .Net Core 2.0 and .Net Standard 2.0 with support for x64 architecture only (Any CPU will not compile right now). It should, thus, be applicable in any framework where .Net Standard 2.0 (e.g. .Net Framework 4.6.1) is applicable. The project is currently under review; APIs may change in the future. Learning the basics of machine learning has not been easy if you want to use an object-oriented language like C# or VB.Net, because most of the time you have to learn Python before anything else, and then you have to find tutorials with sample data that can teach you more. Even looking at object-oriented projects like Accord.Net, TensorFlow, or CNTK is not easy, because each of them comes with its own API, its own way of implementing the same things, and so on. I was thrilled by the presentations at Build 2018 because they indicated that we can use a generic workflow approach that allows us to evaluate the subject with local data, local .Net programs, local models, and results, without having to use a service or another programming language like Python.


Could blockchain be the missing link in electronic voting?

To bolster the security, accuracy and efficiency of elections, some suggest the implementation of blockchain technology. Blockchain is a decentralised, distributed, electronic ledger used to record transactions in such a way that transactions made using it can't be subsequently altered without the agreement of all parties. Thousands of network nodes are needed to reach consensus on the order of ledger entries. Most famously, blockchain is used for bitcoin transactions, but it's finding use cases in everything from storing medical records to authenticating physical transactions. Such is the level of interest in blockchain technology that governments are even examining its potential use cases. Blockchain-enabled elections have already taken place: In March, Sierra Leone voted in its presidential elections and votes in the West Districts were registered on a blockchain ledger by Swiss-based firm Agora. By storing the data in this way, election data was "third-party verifiable and protected against any possibility of tampering," the company said, with the results publicly available to view.
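The tamper-evidence property described here comes from hash-chaining: each ledger entry commits to the previous entry's hash, so altering any recorded entry invalidates everything after it. A minimal, illustrative Python sketch — not a real voting system or consensus protocol, just the chaining idea:

```python
import hashlib
import json

def block_hash(index, prev_hash, payload):
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps([index, prev_hash, payload]).encode()).hexdigest()

def append(chain, payload):
    """Append a new block that commits to the current chain tip."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"index": len(chain), "prev": prev, "payload": payload,
                  "hash": block_hash(len(chain), prev, payload)})

def verify(chain):
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    for i, blk in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if blk["prev"] != expected_prev:
            return False
        if blk["hash"] != block_hash(i, blk["prev"], blk["payload"]):
            return False
    return True

chain = []
append(chain, {"ballot": "candidate-A"})
append(chain, {"ballot": "candidate-B"})
print(verify(chain))   # True

chain[0]["payload"]["ballot"] = "candidate-B"   # attempt to alter a recorded vote
print(verify(chain))   # False: tampering is detectable
```

In a real deployment the consensus of many nodes is what prevents an attacker from simply recomputing all the hashes after an edit; this sketch shows only why an edit is detectable in the first place.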


Secure by Default Is Not What You Think

Once a product is built to be secure by default, it still needs to remain that way once deployed in its environment, which is increasingly complex and interconnected. That’s why the first responder — the person installing the product, application, or database — is ever more important. To keep the organization and users safe, the first responder needs to apply general principles, such as configuring controls to be as secure as possible, enabling encryption at rest and SSL/TLS secure communication channels, restricting access to applications or data only to those people who need it, and requiring authentication that relies on trusted identity sources. Certificate- or key-based authentication is also a consideration. General principles can guide administrators, yet one size does not fit all. Administrators also have to tailor approaches to specific environments. What banks need from their databases, applications, and other technologies, for instance, is different from what oil companies or intelligence agencies need. Whatever the industry, someone needs to watch the whole picture. For instance, a database sits between an application above it and an operating system below it.
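As a concrete illustration of one of these principles, here is a minimal Python sketch of requiring certificate-verified TLS for an outbound connection. The helper is illustrative; real products expose these choices through their own configuration rather than code, but the fail-closed posture is the same:

```python
import socket
import ssl

# Secure-by-default TLS settings using Python's standard ssl module.
ctx = ssl.create_default_context()            # certificate verification on by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
ctx.check_hostname = True                     # already the default; shown for emphasis
ctx.verify_mode = ssl.CERT_REQUIRED           # no unauthenticated peers

def secure_connect(host, port=443):
    """Open a TLS channel that fails closed if the peer cannot be verified."""
    raw = socket.create_connection((host, port), timeout=5)
    return ctx.wrap_socket(raw, server_hostname=host)
```

The design choice worth noting is that every setting tightens rather than loosens: an administrator following the "secure as possible" principle starts from the verified defaults and restricts further, never the reverse.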


Underground vendors can reliably obtain code signing certificates from CAs

The researchers were also surprised to find that all vendors opt for selling the anonymous code signing certificates to the malware developers instead of providing a signing service for a fee. “All vendors claim that their certificates are freshly issued, that is, that they have not been stolen in any way but obtained directly from a CA. Further, all vendors claimed that they sell one certificate into the hands of just one client, and some even offered free or cheap one-time reissue if the certificate was blacklisted too soon. Vendors did not appear to be concerned with revocation, often stating that it usually ‘takes ages’ until a CA revokes an abused certificate,” they shared. “Some vendors even claim to obtain the certificate on demand, having the certificate issued once a customer pays half of the price. Interestingly, [one vendor] even claims that he always has a few publisher identities prepared and the customer can then choose which of these publisher names he wants to have his certificate issued on.”



Quote for the day:


"The highest reward for a man's toil is not what he gets for it but what he becomes by it." -- John Ruskin


Daily Tech Digest - June 26, 2018

Is It Time To Say Goodbye To Email?


Nearly all organizations that try to get rid of email start internally by switching to a cloud-based collaborative system that allows employees to chat, correspond and work together virtually. Some companies have even resorted to an automatic response when an internal email is sent reminding the sender that the email won’t be responded to and that they need to use the collaboration software instead. ... The happy medium between getting rid of email completely and keeping it as is, is to modify it in some way. If the future of work relies on new technology and collaboration, it makes sense to imagine the next generation of email serving a similar purpose to pagers in the 1990s. If someone posts on the office’s collaborative system, sends a calendar invite, or tags you in a post, you could get an alert in your email that directs you to the correct system for the information. In the forward-thinking view of email, the purpose is to notify and direct instead of provide all the information. This system would seem to work better internally, but could also have success across organizations.



North American, UK, Asian regulators press EU on data privacy exemption

It also narrows an exemption for cross-border personal data transfers made in the “public interest” by imposing new conditions, including extra privacy safeguards, on its use, said the officials and legal experts. Under the previous law, regulators used the exemption to share vital information, such as bank and trading account data, to advance probes into a range of misconduct. For now, regulators are operating on the basis they can continue sharing such data under the new exemption but say doing so takes them into legally ambiguous territory because the new law’s language leaves room for interpretation. They fear that without explicit guidance, investigations such as current U.S. probes into cryptocurrency fraud and market manipulation in which many actors are based overseas, could be at risk. This is because in the absence of an exemption, cross-border information sharing could be challenged on the grounds that some countries’ privacy safeguards fall short of those now offered by the EU.


5 reasons the IoT needs its own networks

Despite having to be built from scratch, these new IoT networks can offer much less expensive service. T-Mobile, for example, offers a $6-a-year rate plan for machines on its new NB-IoT network. The company claims that’s 1/10 the cost of a similar Verizon service plan, but even $60 a year is far less expensive than a standard cellular connection. Just as important, the low-power devices that use these networks are much less expensive than standard LTE devices like mobile phones. As AT&T put it in a press release last year, "We can now reach new places and connect new things at a price that's more affordable than ever before.” ... Efficient use of scarce, expensive radio-frequency spectrum is the third reason dedicated IoT networks make sense. Both NB-IoT and LTE-M can be deployed in a very small slice spectrum channel compared to 4G deployments. NB-IoT can even be deployed in so-called LTE spectrum "guard bands" that sit between LTE channels to prevent interference. That means NB-IoT communications do not share spectrum resources with standard smartphone traffic, for example.


Tales from the Crypt(ography) Lab with Dr. Kristin Lauter

So that might sound like it’s in the stone age when we think of how fast technology evolves these days. But typically, for public key crypto systems over the last 40, 50 years, there have been roughly at least a 10-year time lag before crypto technologies get adopted. And that’s because the community needs to have time to think about how hard these problems are, and to set the security levels appropriately, and to standardize the technologies. So, we’re just getting to that point now where, kind of, almost 10 years after the first solutions were introduced, we’ve put together a broad coalition of researchers in industry, government and academia, to come together to try to standardize this technology. And we’re having our second workshop in March at MIT, where we’re going to try to get broad approval for our standard document, which recommends security parameters. So that’s the first challenge, is getting everyone to agree on what is the strength of these systems, kind of, essentially, how hard are these mathematical problems underneath, and then we plan to continue to build on that with this community, to get agreement on a common set of APIs.


Ethical Data Science Is Good Data Science


When you work with 3rd parties, where your data is “better together,” should you share it all? No. This means enforcing fine-grained controls on your data. Not just coarse-grained role-based access control (RBAC), but down to the column and row level of your data, based on user attributes and purpose (more on that below). You need to employ techniques such as column masking, row redaction, limiting to an appropriate percentage of the data, and even better, differential privacy to ensure data anonymization. In almost all cases, your data scientists will thank you for it. It provides accelerated, compliant access to data and with that a great deal of comfort, freedom, and collaboration that comes when everyone knows they are compliant in what they are doing and can share work more freely. This freedom to access and share data comes when data controls are enforced at the data layer consistently and dynamically across all users. It provides the strong foundation needed to enable a high performing data science team.
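These controls can be sketched in plain Python. The column names, policy predicate, and noisy count below are illustrative stand-ins for what a real governance layer would enforce at the data layer:

```python
import random

# Illustrative data; the "ssn" column stands in for any sensitive attribute.
rows = [
    {"name": "Ana",  "ssn": "111-22-3333", "region": "eu"},
    {"name": "Ben",  "ssn": "444-55-6666", "region": "us"},
    {"name": "Caro", "ssn": "777-88-9999", "region": "eu"},
]

def mask_column(rows, column):
    """Column masking: replace a sensitive column's values with a fixed token."""
    return [{**r, column: "***"} for r in rows]

def redact_rows(rows, allowed):
    """Row redaction: return only the rows the caller's attributes permit."""
    return [r for r in rows if allowed(r)]

def dp_count(rows, predicate, epsilon=1.0):
    """Differentially private count: true count plus Laplace(1/epsilon) noise,
    sampled here as the difference of two exponential draws."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return sum(predicate(r) for r in rows) + noise

print(mask_column(rows, "ssn")[0]["ssn"])                     # ***
print(len(redact_rows(rows, lambda r: r["region"] == "eu")))  # 2
print(round(dp_count(rows, lambda r: True), 2))               # roughly 3, plus noise
```

The key point the excerpt makes is where these run: enforced dynamically at the data layer for every user, not re-implemented ad hoc by each data scientist.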


Function Platforms with Chad Arimura and Matt Stephenson

“Serverless” is a word used to describe functions that get deployed and run without the developer having to manage the infrastructure explicitly. Instead of creating a server, installing the dependencies, and executing your code, the developer just provides the code to the serverless API, and the serverless system takes care of the server creation, the installation, and the execution. Serverless was first offered with the AWS Lambda service, but has since been offered by other cloud providers. There have also been numerous open source serverless systems. On SE Daily, we have done episodes about OpenWhisk, Fission, and Kubeless. All of these are built on the Kubernetes container management system. Kubernetes is an open-source tool used to build and manage infrastructure, so it is a useful building block for higher level systems.
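The division of labor described above can be sketched in Python. The handler shape is modeled loosely on common FaaS conventions (an event plus a context), and the "platform" function is just a stand-in for everything a provider actually does — provisioning, scaling, and dispatch:

```python
import json

# The developer supplies only this function.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}"})}

# Everything below is the platform's job, not the developer's:
def simulate_platform_invoke(raw_request):
    """What the provider does per request: parse input, invoke, return output."""
    event = json.loads(raw_request)
    return handler(event, context={})

print(simulate_platform_invoke('{"name": "Ada"}')["body"])
# {"message": "Hello, Ada"}
```

The appeal of the model is visible in what is absent: no server creation, no dependency installation, no process lifecycle — the developer's code begins and ends at `handler`.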


Serverless cloud computing is the next big thing

Serverless computing in the cloud is a good idea—serverless computing is not just for the datacenter. Serverless cloud computing means the ability to get out of the business of provisioning cloud-based servers, such as storage and compute, to support your workloads, and instead use automation at the cloud provider to allocate and deallocate resources automatically. ... We’re witnessing a reengineering of public cloud services to use a serverless approach. First, we’re seeing resource-intensive services such as compute, storage, and databases, but you can count on the higher-end cloud services being added to the list over time, including machine learning and analytics. What this all means for the enterprise is that less work will be needed to figure out how to size workloads. This serverless trend should also provide better utilization and efficiency, which should lower costs over time. Still, be careful: I’ve seen the use of serverless computing lead to higher costs in some instances. So be sure to monitor closely. There is clearly a need for serverless cloud computing.


Latin American banks advance in digital transformation projects

In terms of consumer technology trends, mobile banking in the region has surpassed both online banking and traditional channels and has become the number one channel for banks today, the report says. Regional Internet banking client uptake is at 67 percent, compared to 79 percent in 2015, while mobile applications rose to 33 percent. Millennials appear to be the most important target segment for digital banking, followed by premium clients and "native", digital customers, according to the study. When it comes to enterprise technology trends, the report notes that over 60 percent of Latin American banks are implementing or testing cloud computing, chatbots and Big Data, while a minority (less than 22 percent) mentions Blockchain, Internet of Things and virtual reality. Some 13 percent of the banks surveyed mentioned they have plans to invest in a new core banking platform in the next year while 7 percent are updating their core system. While 70 percent of those polled consider other banks with better digital capabilities as the main threat, nine out of 10 banks surveyed consider fintechs as potential partners or acquisitions.


Why Intel won't patch TLBleed vulnerability, despite serious concerns for cloud users

Maybe Intel has solutions with less overhead. But Intel excluded us from conversation so we don't know what those solutions might be. So we follow a pattern of immediately releasing a rough solution, which we can retract if a cheaper solution becomes published." Intel's position on this is somewhat peculiar, as the company has indicated that existing mitigations are sufficient to prevent this issue, and has declined to request a CVE to identify the flaw, as is standard. The Register report also indicates that Intel has declined to pay a bug bounty for this discovery via HackerOne, even though, as a side-channel attack, it appears to fall within the scope of the requirements Intel lists; Gras described this to The Register as "goalpost-moving." Exploitation of, and patches for, TLBleed are likely to be more technically involved than the OpenBSD strategy of disabling SMT entirely, as ensuring that schedulers do not place processes of different security levels in the same core is a significant undertaking.


The C4 Model for Software Architecture


Ambiguous software architecture diagrams lead to misunderstanding, which can slow a good team down. In our industry, we really should be striving to create better software architecture diagrams. After years of building software myself and working with teams around the world, I've created something I call the "C4 model". C4 stands for context, containers, components, and code — a set of hierarchical diagrams that you can use to describe your software architecture at different zoom levels, each useful for different audiences. Think of it as Google Maps for your code. ... Level 2, a container diagram, zooms into the software system, and shows the containers (applications, data stores, microservices, etc.) that make up that software system. Technology decisions are also a key part of this diagram. Below is a sample container diagram for the Internet banking system. It shows that the Internet banking system (the dashed box) is made up of five containers: a server-side web application, a client-side single-page application, a mobile app, a server-side API application, and a database.



Quote for the day:


"When you practice leadership, the evidence of the quality of your leadership is known from the type of leaders that emerge out of your leadership." -- Sujit Lalwani


Daily Tech Digest - June 25, 2018

What Is A Zero-Day Exploit? A Powerful But Fragile Weapon

A zero-day is a security flaw that has not yet been patched by the vendor and can be exploited and turned into a powerful but fragile weapon. Governments discover, purchase, and use zero-days for military, intelligence and law enforcement purposes — a controversial practice, as it leaves society defenseless against other attackers who discover the same vulnerability. Zero-days command high prices on the black market, but bug bounties aim to encourage discovery and reporting of security flaws to the vendor. The patching crisis means zero-days are becoming less important, and so-called 0ld-days become almost as effective. A zero-day gets its name from the number of days that a patch has existed for the flaw: zero. Once the vendor announces a security patch, the bug is no longer a zero-day (or "oh-day" as the cool kids like to say). After that the security flaw joins the ranks of endless legions of patchable but unpatched old days. In the past, say ten years ago, a single zero-day might have been enough for remote pwnage. This made discovery and possession of any given zero-day extremely powerful.



Address network scalability requirements when selecting SD-WAN


Calculating scalability based on the number of sites can be trickier. Not only do scalability requirements include provisioning sufficient bandwidth for all your sites, but network architecture matters when considering the scale needed to support a large number of branches. Some SD-WAN offerings are designed to spin up a virtual pipe from every site to every other site and maintain it perpetually. That option puts a large burden of VPN management on the service, which grows quadratically with the number of sites. Other SD-WAN services may also depend on VPNs, but without the need to have each VPN on constantly. For example, the service might allow customers to precalculate some of the necessary operating parameters for the VPNs and instantiate them only when needed for a network session. This option can support far more nodes than the previous design. Still, other SD-WAN products take a different approach entirely, without big VPN meshes. These employ architectures where the work of supporting the N+1st site is the same as the work of supporting the second site. This design could support even more nodes.
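The scaling difference between the designs is easy to quantify: a persistent full mesh needs a tunnel between every pair of sites, n(n-1)/2 in total, while an architecture where each new site costs the same as the second needs only n. A quick sketch (site counts are arbitrary):

```python
def full_mesh_tunnels(n):
    """Persistent tunnel between every pair of sites: grows quadratically."""
    return n * (n - 1) // 2

def per_site_tunnels(n):
    """One tunnel per site: the N+1st site costs the same as the second."""
    return n

for sites in (10, 100, 1000):
    print(sites, full_mesh_tunnels(sites), per_site_tunnels(sites))
# 10 sites:   45 vs 10 tunnels
# 100 sites:  4950 vs 100
# 1000 sites: 499500 vs 1000
```

At a thousand branches the full mesh is maintaining roughly half a million tunnels, which is why architecture, not just bandwidth, drives how many sites an SD-WAN service can support.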


Ex-Treasury Official Touts the Promise of Fintech ‘Sandboxes’

As it stands now, “there’s nothing that calls itself a sandbox” in the U.S., Crane said. But comments by a Treasury official on Thursday at a SIFMA event about an upcoming report by Treasury on such reg tools signals promise of movement. What exactly is a “regulatory sandbox”? As the RegTechLab report explains, it’s a new tool allowing companies “to test new products, services, delivery channels or business models in a live environment, subject to appropriate conditions and safeguards.” Regulators, the report continues, “have also taken other proactive steps to engage with industry directly, and in some cases pursued mechanisms less formal than sandboxes to facilitate testing or piloting of new innovations.” Craig Phillips, counselor to the Treasury secretary, weighed in on Thursday that the financial services landscape “has over 3,300 new fintech companies” and “over 20% of all personal loans” originate in the fintech marketplace. “We need a new approach by regulators that permits experimentation for services and processes,” said Phillips, adding that it could include regulatory sandboxes, aka innovation facilitators.


Adapting to the rise of the holistic application


A shift in mindset is needed. McFadin says it is much harder to call a project “done”, as each element can be changed or updated at any time. While the services can be more flexible, it is necessary to think differently about the role of software developers. Companies that have implemented agile development properly should be equipped to manage this change more effectively. However, those that just namecheck agile, or don’t engage in the process fully, may well struggle. Eric Mizell, vice-president of solution engineering at software analytics company OverOps, claims the new composable, containerised, compartmentalised world of software is creating headaches for those tasked with maintaining the reliability of these complex applications. “Even within the context of monolithic applications, our dependence on 30-year-old technology, such as logging frameworks to identify functional issues in production code, is sub-standard at best – within the context of microservices and holistic applications, it is nearly worthless,” says Mizell.


Blockchain Watchers Say Decentralized Apps Are Around The Corner

More than a decade ago, Apple had to deal with that perennial chicken-and-egg problem: finding killer apps that made people want to buy an iPhone. Developers building apps on blockchain technology face the same dilemma. Not enough people are using browsers and tokens that run on a blockchain network, so it’s hard to amass the number of users needed to propel a new app to success. But that hasn’t stopped people from trying or researchers from divining that decentralized apps, or “dapps,” really are just around the corner. One recent report from Juniper Research, a market intelligence firm in the U.K., states that in the coming year we’ll see a “significant expansion” in the deployment of dapps built on blockchain technology. Regular iPhone and Android users should be able to download a dapp on their smartphone “by the end of the year,” Juniper's head of forecasting, Windsor Holden, told Forbes, adding that the dapps most likely to first gain mass adoption would deal with verifying identity or tracking the provenance of products or food in the supply chain.


IoT could be the killer app for blockchain

abstract blockchain representation of blocks and nodes
The rise of edge computing is critical in scaling up tech deployments, owing to reduced bandwidth requirements, faster application response times and improvements in data security, according to Juniper Research. Blockchain experts from IEEE believe that when blockchain and IoT are combined they can actually transform vertical industries. While financial services and insurance companies are currently at the forefront of blockchain development and deployment, the transportation, government and utilities sectors are now engaging more due to the heavy focus on process efficiency, supply chain and logistics opportunities, said David Furlonger, a Gartner vice president and research fellow. For example, pharmaceuticals are required by law to be shipped and stored in temperature-controlled conditions, and data about that process is required for regulatory compliance. The process for tracking drug shipments, however, is highly fragmented. Many pharmaceutical companies pay supply chain aggregators to collect the data along the journey to meet the regulatory standards.


Serverless Native Java Functions using GraalVM and Fn Project

The Fn Project is an open-source, container-native serverless platform that you can run anywhere: any cloud or on-premises. It's easy to use, supports every programming language, and is extensible and performant. It is an evolution of the IronFunctions project from iron.io and is maintained mainly by Oracle, so you can expect an enterprise-grade solution with first-class support for building and testing. It leverages container technology to run, and you can get started very quickly; the only prerequisite is an installed Docker. ... Java is often blamed for being heavy and unsuitable for running as a serverless function. That reputation is not unfounded: a Java application traditionally needs a full JRE to run, with slow startup times and high memory consumption compared to native executables such as Go binaries. Fortunately this is no longer true. With newer versions of Java you can create modular applications, compile ahead-of-time, and use new and improved garbage collectors in both the OpenJDK and OpenJ9 implementations. GraalVM is a new flavor that delivers a JVM supporting multiple languages and compilation into a native executable or shared library.
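Since Fn supports every programming language, a function is just a plain handler the platform invokes with the request body. As a minimal sketch in Python (the real Fn Python FDK passes a context object and the request payload to a handler much like this one; the names here are illustrative, not Fn's exact API):

```python
# Sketch of an Fn-style serverless function handler.
# `ctx` stands in for the platform's invocation context; `data` is the raw
# JSON request body, if any. The FDK would wrap and register this handler.
import json

def handler(ctx, data=None):
    """Return a JSON greeting for the caller named in the request body."""
    name = "world"
    if data:
        try:
            name = json.loads(data).get("name", "world")
        except (ValueError, AttributeError):
            # Malformed or non-object JSON: fall back to the default.
            pass
    return json.dumps({"message": f"Hello, {name}!"})

print(handler(None, '{"name": "Fn"}'))
```

With the Fn CLI installed, the typical flow is `fn init --runtime <language>`, then `fn deploy` to package the function into a container image and register it with the platform.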


Data Science for Startups: Deep Learning


Deep learning provides an elegant solution to handling these types of problems, where instead of writing a custom likelihood function and optimizer, you can explore different built-in and custom loss functions that can be used with the different optimizers provided. This post will show how to write custom loss functions in Python when using Keras, and show how using different approaches can be beneficial for different types of data sets. I'll first present a classification example using Keras, and then show how to use custom loss functions for regression. The image below is a preview of what I'll cover in this post. It shows the training history of four different Keras models trained on the Boston housing prices data set. Each of the models uses a different loss function, but all are evaluated on the same performance metric, mean absolute error. For the original data set, the custom loss functions do not improve the performance of the model, but on a modified data set, the results are more promising.
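As a framework-free sketch of one plausible custom loss for this kind of regression task, here is mean absolute error computed on log-transformed values, a common choice for dampening the effect of large outliers in skewed targets such as housing prices. It is written with NumPy so the arithmetic is easy to verify; in Keras the same expression would use backend ops (`K.mean`, `K.abs`, `K.log`) and be passed to `model.compile(loss=...)`:

```python
# Custom-loss sketch: mean absolute error on log-transformed values.
import numpy as np

def mean_log_absolute_error(y_true, y_pred):
    # Shift by 1 before taking logs so zero-valued targets stay finite.
    return np.mean(np.abs(np.log(y_true + 1.0) - np.log(y_pred + 1.0)))

y_true = np.array([100.0, 200.0, 4000.0])
y_pred = np.array([110.0, 180.0, 2000.0])

# A perfect prediction scores 0; the large miss on 4000 dominates, but far
# less than it would under a plain mean absolute error.
assert mean_log_absolute_error(y_true, y_true) == 0.0
print(mean_log_absolute_error(y_true, y_pred))
```

The choice of loss encodes what "close" means: on the log scale, predicting 2000 for 4000 is penalized like being off by a factor of two, not by 2000 units.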


REST API Error Handling — Problem Details Response

RFC 7807 defines a "problem detail" as a way to carry machine-readable details of errors in an HTTP response, avoiding the need to define new error response formats for HTTP APIs. By providing more specific machine-readable messages with an error response, API clients can react to errors more effectively, which ultimately makes the API more reliable both from a testing perspective and for the clients consuming it. In general, the goal of error responses is to create a source of information that tells the user not only about the problem but also about its solution. Simply stating a problem does nothing to fix it — and the same is true of API failures. RFC 7807 provides a standard format for returning problem details from HTTP APIs. ... The advantages of using it include a unification of the interfaces, making the APIs easier to build, test and maintain. I also think that more advantages will come as more and more API providers adopt this standard.
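The shape of such a response is small and uniform; the example below follows the one given in RFC 7807 itself, returned with the `application/problem+json` media type:

```json
{
  "type": "https://example.com/probs/out-of-credit",
  "title": "You do not have enough credit.",
  "status": 403,
  "detail": "Your current balance is 30, but that costs 50.",
  "instance": "/account/12345/msgs/abc"
}
```

`type` identifies the problem class, `title` is a short human-readable summary, `status` mirrors the HTTP status code, and `detail` and `instance` describe this specific occurrence.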


Protecting IoT components from being physically compromised


Disruption of these industrial devices can cause catastrophic events on an international scale, hence the importance of implementing security solutions against a variety of attack vectors. The sole purpose is to prevent intrusion by unauthorized (external or internal) actors and avoid disruption of critical control processes. This is not a theory but rather a disturbing fact. In 2017, a group of researchers from Georgia Tech developed a worm named "LogicLocker" that caused several PLC models to transmit incorrect data to the systems they control, with harmful consequences as a result. The common security methods for industrial networks are based mainly on the integration of dedicated network devices connected to the traffic artery at central junctions (usually next to network switches). This security method sniffs the data flow between the PLCs themselves, between the PLCs and the cloud (public or private), and between the user interface (HMI) and the cloud.



Quote for the day:


"Always and never are two words you should always remember never to use." -- Wendell Johnson