Daily Tech Digest - May 17, 2018

7 Basic Rules for Button Design


Every item in a design requires effort by the user to decode. Generally, the more time users need to decode the UI, the less usable it becomes for them. But how do users understand whether a certain element is interactive or not? They use previous experience and visual signifiers to clarify the meaning of the UI object. That’s why it is so important to use appropriate visual signifiers (such as size, shape, color, shadow, etc.) to make the element look like a button. Visual signifiers hold an essential information value — they help to create affordances in the interface. Unfortunately, in many interfaces the signifiers of interactivity are weak and require interaction effort; as a result, they effectively reduce discoverability. If clear affordances of interaction are missing and users struggle to tell what is “clickable” and what is not, it won’t matter how cool we make the design: if users find it hard to use, they will find it frustrating and ultimately not very usable. Weak signifiers are an even more significant problem for mobile users. To work out whether an individual element is interactive, desktop users can move the cursor over the element and check whether it changes its state; mobile users don’t have that opportunity.



Serverless deployment lifts enterprise DevOps velocity


At a tipping point of serverless expertise, enterprises will start to put existing applications in serverless architectures as well. Significant challenges remain when converting existing apps to serverless, but some mainstream companies have already started that journey. Smart Parking Ltd., a car parking optimization software maker based in Australia, moved from its own data centers in Australia, New Zealand and the U.K. to AWS cloud infrastructure 18 months ago. Its next step is to move to an updated cloud infrastructure based on Google Cloud Platform, which includes Google Cloud Functions serverless technology, by June 2018. "As a small company, if we just stayed with classical servers hosted in the cloud, we were doing the same things the same way, hoping for a different outcome, and that's not realistic," said John Heard, CTO at Smart Parking. "What Google is solving are the big questions around how you change your focus from writing lots of code to writing small pieces of code that focus on the value of a piece of information, and that's what Cloud Functions are all about," he added.


3 reasons why hiring older tech pros is a smart decision

For software engineers in particular, experience counts a lot, Matloff said. "The more experienced engineers are far better able to look down the road, and see the consequences of a candidate code design," he added. "Thus they produce code that is faster, less bug-prone, and more extendible." And in data science, recent graduates may know a number of techniques, but often lack the ability to use them effectively in the real world, Matloff said. "Practical intuition is crucial for effective predictive modeling," he added. Older tech workers also typically have more experience in terms of management and business strategy, Mitzner said. Not only can they offer those skills to the company, they can also act as mentors to younger professionals and pass on their knowledge, she added. "Most people who have been successful in their career would say that they had a great mentor," Mitzner said. "If you have a business that's all 20s to 30s, you could be really missing out on that." Many older employees also appreciate the same flexibility that younger workers do, as they balance work and home life with aging parents and children reaching adulthood, said Sarah Gibson, a consultant with expertise on changing generations in the workforce.


Computes DOS: Decentralized Operating System

“Computes is more like a decentralized operating system than a mesh computer,” replied one of our most active developer partners yesterday. He went on to explain that Computes has all of the components of a traditional computer but designed for decentralized computing. The more I think about it — the more profound his analogy is. We typically describe Computes as a decentralized peer-to-peer mesh supercomputing platform optimized for running AI algorithms near realtime data and IoT data streams. Every machine running the Computes nanocore agent can connect, communicate, and compute together as if they were physical cores within a single software-defined supercomputer. In light of yesterday’s discussion, I believe that we may be selling ourselves short on Computes’ overall capabilities. While Computes is uniquely well positioned for enterprise edge and high performance computing, most of our beta developers seem to be building next generation decentralized apps (Dapps) on top of our platform.


GDPR impact on Whois data raising concern


Cyber criminals typically register a few hundred, even thousands, of domains for their activities, and even if fake details are used, registrants have to use a real phone number and email address, which is enough for the security community to link associated domains. Using high-speed machine-to-machine technology and with full access to Whois data, Barlow said organisations such as IBM were able to block millions of spam messages or delay activity coming from domains associated with the individuals linked to spam messages. While the GDPR is designed to enhance the privacy of individuals, it is having the unintended effect of encouraging domain registrars not to submit registration details to the RDS, which means the information is incomplete and of less value to cyber crime fighters. Without access to Whois data, IBM X-Force analysts predict it might take more than 30 days to detect malicious domains by other methods, leaving organisations at the mercy of cyber criminals during that period.


Brush up on microservices architecture best practices


Isolating and debugging performance problems is inherently harder in microservice-based applications because of their more complex architecture. Therefore, productively managing microservices' performance calls for a full-fledged troubleshooting plan. In this follow-up article, Kurt Marko elaborated on what goes into successful performance analysis. Effective examples of the practice will incorporate data pertaining to metrics, logs and external events. To make the most of tools like Loggly, Splunk or Sumo Logic, aggregate all of this information into one unified data pool. You might also consider a tool that uses the open source ELK Stack. Elasticsearch has the potential to greatly assist troubleshooters in identifying and correlating events, especially in cases where log files don't display the pertinent details chronologically. The techniques and automation tools used for conventional monolithic applications aren't necessarily well-suited to isolate and solve microservices' performance problems.


More Attention Needs to be on Cyber Crime, Not Cyber Espionage

Cyber crime remains a global problem that continues to be innovative and all-encompassing. What’s more, cyber crime doesn’t focus solely on organizations but also on individuals. The statistics demonstrate the magnitude of the cyber crime onslaught. According to a 2017 report by one company, damages incurred by cyber crime are expected to reach USD $6 trillion by 2021. Conversely, cyber security investment is only expected to reach USD $1 trillion by 2021, according to the same source. Furthermore, data breaches continue to afflict individuals. During the first half of 2017, more than 2 billion records fell victim to cyber theft, whereas “only” 721 million records were lost during the last half of 2016, a 164 percent increase. According to another reputable source, the three major classifications of breaches impacting people were identity theft (69 percent), access to financial data (15 percent), and access to accounts (7 percent). With cyber crime communities existing all over the world, these groups and individuals offer professional goods and services judged on quality and reputation, which quickly weeds out inferior performers; innovation and dependability are instrumental to success.


Data integrity and confidentiality are 'pillars' of cybersecurity


There are two pillars of information security: data integrity and confidentiality. Let's take a simple example: your checking account. Integrity is about the number itself. When you go to an ATM, or online, or to a teller and check your balance, that number should be easily agreed upon by you and your bank. There should be a clear ledger showing who put money in, when and how much, and who took money out, when and how much. There shouldn't be any randomness; there shouldn't be people putting money in or taking money out without your knowledge or your permission. So, one pillar is ensuring the integrity of information -- the code you're running, the executables of your applications, should be the same ones the developer wrote. Just like the numbers in your bank account, the code you're running should not be tampered with. Then, there's confidentiality. You and your bank should be the only ones who know the numbers in your bank account. Taking confidentiality away from your checking account creates the same problems you get when you take it away from your applications and infrastructure.
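One concrete way to act on "the code you're running should not be tampered with" is to compare an artifact's cryptographic digest against the value the developer published. Below is a minimal sketch in Java; the file name and the expected digest are placeholders, not values from the article.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    public class IntegrityCheck {
        public static void main(String[] args) throws Exception {
            // Placeholders: substitute the real artifact and its published SHA-256.
            Path artifact = Path.of("app.jar");
            String expected = "<digest published by the developer>";

            byte[] bytes = Files.readAllBytes(artifact);
            String actual = HexFormat.of().formatHex(
                    MessageDigest.getInstance("SHA-256").digest(bytes));

            System.out.println(actual.equalsIgnoreCase(expected)
                    ? "Integrity check passed"
                    : "Digest mismatch: the artifact may have been tampered with");
        }
    }

If the digests differ, the integrity pillar has been violated, regardless of whether confidentiality was ever breached.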


This new type of DDoS attack takes advantage of an old vulnerability

"Just like the much-discussed case of easily exploitable IoT devices, most UPnP device vendors prefer focusing on compliance with the protocol and easy delivery, rather than security," Avishay Zawoznik, security research team leader at Imperva, told ZDNet. "Many vendors reuse open UPnP server implementations for their devices, not bothering to modify them for a better security performance." Examples of problems with the protocol go all the way back to 2001, but the simplicity of using it means it is still widely deployed. However, Imperva researchers claim their discovery of how it can be used to make DDoS attacks more difficult to defend against could mean widespread problems. "We have discovered a new DDoS attack technique, which uses known vulnerabilities, and has the potential to put any company with an online presence at risk of attack," said Zawoznik. Researchers first noticed something new during a Simple Service Discovery Protocol (SSDP) attack in April.


This article focuses on the Module System and Reactive Streams; you can find an in-depth description of JShell here, and of the Stack Walking API here. Naturally, Java 9 also introduced some other APIs, as well as improvements related to internal implementations of the JDK; you can follow this link for the entire list of Java 9 characteristics. The Java Platform Module System (JPMS) – the result of Project Jigsaw – is the defining feature of Java 9. Simply put, it organizes packages and types in a way that is much easier to manage and maintain. In this section, we’ll first go over the driving forces behind JPMS, then walk you through the declaration of a module. Finally, you’ll have a simple application illustrating the module system. ... Module descriptors are the key to the module system. A descriptor is the compiled version of a module declaration – specified in a file named module-info.java at the root of the module’s directory hierarchy. A module declaration starts with the module keyword, followed by the name of the module. The declaration ends with a pair of curly braces wrapping around zero or more module directives.
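To make the declaration described above concrete, here is a minimal, hypothetical module descriptor; the module name and the exported package are invented for illustration and are not taken from the article.

    // module-info.java -- placed at the root of the module's source hierarchy
    module com.example.student {
        // make this package visible to code in other modules
        exports com.example.student.model;

        // declare a dependency on another module
        requires java.logging;
    }

A consuming module would in turn state "requires com.example.student;" in its own descriptor, and the compiler and runtime enforce that only exported packages are accessible.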



Quote for the day:


"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer


Daily Tech Digest - May 16, 2018


In terms of the legal implications of AI, vicarious liability and agency cannot be applied to AI in the same way as they would for employee liability. Due to the black box nature of AI and the lack of transparency in its reasoning, it is difficult to attribute liability. The Fairchild principles of a 'material increase to risk' could be applied in future to determine liability, but without legislative clarification, the position is not entirely clear. Furthermore, AI can monitor price changes within a market and react very quickly, thereby potentially stifling competition by creating a form of collusion in the market. The European Commission is currently taking the threat of AI in competition seriously and exploring solutions to resolve these types of issues. From an intellectual property perspective, legislation has not been updated to cover the ownership of AI-generated intellectual property. Companies will need to ensure ownership of any materials or intellectual property created by AI vests or is transferred to them. In terms of ethics, the law cannot cover every moral scenario. AI is already creating unintended gender, race and socio-economic bias based on the data it works with.



Force multipliers in cybersecurity: Augmenting your security workforce

Organizations are employing security automation and orchestration technologies to make sure that the right person, with the right data, is there at the right time to make decisions, he said. In cybersecurity, it is important that the organization is clear about what actions must be taken after an incident occurs. Automation technologies can make changes right away to contain the issue, he added, but just relying on technologies isn't enough to prepare for today's advanced threats. Organizations should also practice breach preparedness drills to test their response, he stressed.  Implementing these security orchestration and automation practices also relies on strong leadership that develops a team atmosphere and teaches team members to work together during a crisis, he said. It will be important to exhibit these strong cultural traits during a breach, especially because cybersecurity playbooks can crack under pressure, he added. "People want to practice what it's like to go through a breach," he said. "Security orchestration gives you the technology to respond fast and encourages you to practice it so that [when things go wrong] you're ready."


What is predictive analytics? Transforming data into future insights

Organizations use predictive analytics to sift through current and historical data to detect trends and forecast events and conditions that should occur at a specific time, based on supplied parameters. With predictive analytics, organizations can find and exploit patterns contained within data in order to detect risks and opportunities. Models can be designed, for instance, to discover relationships between various behavior factors. Such models enable the assessment of either the promise or risk presented by a particular set of conditions, guiding informed decision-making across various categories of supply chain and procurement events. ... While getting started in predictive analytics isn't exactly a snap, it's a task that virtually any business can handle as long as one remains committed to the approach and is willing to invest the time and funds necessary to get the project moving. Beginning with a limited-scale pilot project in a critical business area is an excellent way to cap start-up costs while minimizing the time before financial rewards begin rolling in. Once a model is put into action, it generally requires little upkeep as it continues to grind out actionable insights for many years.


Successful IoT deployment: The Rolls-Royce approach

"The IoT is useful when you know you can derive business benefit by making unknown processes visible," she says. "If you try and use sensors everywhere, you will get nowhere because it's too expensive and it's too imprecise. Rolls-Royce picks the places where its IoT solutions can make data visible, and which will create significant operational benefits. That, for me, is the key to a successful IoT deployment." Gorski advises other digital chiefs to analyse their business operations and understand where a lack of data transparency creates a headache. She has seen big-bang instrumentation projects happen and, for the most part, these are difficult to justify. "They end up being expensive to implement," says Gorski. "It's costly to transmit data and the business ends up with a patchwork quilt of information. It's important to remember there isn't a single solution for IoT instrumentation and you must bootstrap technology together from lots of different suppliers. All that bootstrapping adds costs and creates complexity."


The NHS is failing to deliver 'basic IT', says Matthew Swindells


“We are investing millions of pounds in technology, yet we’ve got six organisations that still can’t tell us what their waiting lists are. It’s not acceptable,” he said. Barts Health NHS Trust, for instance, hasn’t submitted a referral to treatment report to NHS England for nearly four years. “We walk around most hospitals and we’ve not known how many beds we have and how many patients are lying in them,” said Swindells. “We need to at the very least get the data that we capture back out. If we can’t do the basics, me going cap in hand to the treasury for another £10bn to sort IT out just sounds like fool’s money.”  He highlighted e-rostering as another example of failing to use data properly, saying most hospitals use an e-rostering system, which he described as a “glorified spreadsheet” and “expensive pieces of technology that are not enabling better rostering: not enabling the matching of staffing to clinical need, not enabling staff to be flexible about when they work and therefore making more available”.  “We have to make this stuff work well,” said Swindells.


Here’s what the big four U.S. mobile ISPs are doing with IoT

The playing field might not remain in its current state for long, with the main issue being the proposed $26.5 billion merger between T-Mobile and Sprint. Partridge said that would be a game-changer for carrier-based IoT in the U.S. “In the consumer business, T-Mobile’s going to be in charge of that, they’ve been wildly successful – but I think in IoT, Sprint will have every opportunity to take the lead,” he said. The idea, after the combination, would be to make acquisitions aimed at strengthening the new company’s position on the enterprise side of service provisioning in general, and focused on IoT particularly, though there are a number of tactical options for pursuing such a strategy. The new company could get into fleet management, a la Verizon and AT&T, snap up IoT software companies and package their offerings into new branded services, move heavily into surveillance and security, or even hardware. “The playbook is fairly open in terms of that, but the goal is to get away from connectivity-only value, because that’s not the place to be,” according to Partridge.


GDPR: Less Than One Month Out, the Top 3 Struggles

Instead of a collective sigh, May 25 might create more of a collective grunt. Most privacy professionals know that although a lot of work has been done in the run-up to D-day, GDPR compliance will require a constant focus. It is a journey, not a final destination. Those organizations that treat May 25 as the endpoint of their compliance drive will be proven wrong. Another distinction between organizations will be their levels of ambition. Some organizations will treat GDPR as a mere checklist exercise, what I call the "lawyer" approach (with all due respect to the lawyers amongst you, including myself). Legal compliance is core, but an organization's ambition should aim to go beyond it and create a true cultural change. I truly believe that these privacy leaders ultimately will be rewarded in the market, banking on what I call a "trust dividend", reaping the benefits of constant investments in this space. Even though there is a broad spectrum amongst organizations around GDPR compliance, there are also some common themes and questions. In my role as CA Technologies Chief Privacy Strategist, I have had the opportunity to discuss GDPR with organizations, both public and private.


Location-based services move beyond mobile and into enterprise apps

The battle for LBS relevance moves from companies that only support increasingly commoditized location data, which they license (e.g., mapping data for GPS), to those that can offer enhanced and supplemental services. Previously seen as an old-style GPS/mapping data company, the largest LBS company, HERE, is moving away from the old model, although not totally. It’s changing from just being a database to being a value-added supplier of a full range of LBS with its Open Location Platform. HERE has several partnerships with auto companies (Audi, BMW) and others (Intel, Oracle, Amazon Web Services, Microsoft) to add platform capabilities beyond their extensive mapping database. Those capabilities include value-added services such as tracking, traffic, safety services, and HD maps. HERE's main cloud-based LBS platform competitor, MapBox, offers similar services but does not include its own mapping database, instead allowing clients to link to their preferred mapping data. HERE and Mapbox have some distinct strategy differences: Mapbox relies on others' data sets and can connect as needed and by user preference. HERE has its own data sets and is looking to add value on top.


Threat analytics: Keeping companies ahead of emerging application threats

Applications which can be downloaded are particularly vulnerable to cyber criminals, as they can be isolated from the network and attacked indefinitely until their defences are broken. Because so many people use their personal mobile devices for work purposes, a compromised app will not only attack the individual or the business entity that published the app but could also grant attackers access to enterprise networks. Any application on an app store can be downloaded by anyone, and that includes bad actors. If an app is lacking in protection, once downloaded a bad actor might reverse engineer it, leaving it vulnerable to wide-scale tampering, IP/PII theft or API attacks. With the code left so vulnerable, the threat is extremely likely to turn into a widespread attack resulting in a loss of customers, brand damage, lost revenue, and lost jobs. On the other hand, with a threat analytics solution in place from the start, apps can provide valuable insights to the business the moment they are downloaded from an app store, thereby closing the loop.


Optimizing an artificial intelligence architecture: The race is on


Today, most AI workloads use a preconfigured database optimized for a specific hardware architecture. The market is going toward software-enabled hardware that will allow organizations to intelligently allocate processing across GPUs and CPUs depending on the task at hand, said Chad Meley, vice president of analytic products and solutions at Teradata. Part of the challenge is that enterprises use multiple compute engines to access multiple storage options. Large enterprises tend to store frequently accessed, high-value data such as customer, financials, supply chain, product and the like in high-performing, high-I/O environments, while less frequently accessed big data sets such as sensor readings, web and rich media are stored in cheaper cloud object storage. One of the goals of composable computing is to use containerization to spin up compute instances such as SQL engines, graph engines, machine learning engines and deep learning engines that can access data spread across these different storage options.



Quote for the day:


"The task of leadership is not to put greatness into humanity, but to elicit it, for the greatness is already there." -- John Buchan


Daily Tech Digest - May 15, 2018

Why learning to code won't save you from losing your job to a robot

That's not to say that we won't have a need for high-level coders, Burton said. Many engineers are solving difficult problems that require creativity, while others are performing important research. However, a lot of software being written today is "essentially glue code," Burton said. "It's putting together pieces that already exist. That's the sort of thing that starts to get automated." Where should people turn instead to future-proof their careers? The humanities, according to Burton. "The humanities start to become very important when you start to realize that technology is going to become very, very easy to use," Burton said. "The toolsets change, but what becomes important is the creativity, and particularly the understanding of the human mind. Because as long as humans are still the consumer, they're going to matter, and they're going to be demanding humans in some areas of the process." One new area that requires human intelligence is determining where humans tolerate technology, Burton said.


Hyperledger Sawtooth: Blockchain for the enterprise

"This isn't just that a node crashes or a third of the nodes on the network crash, but rather something like up to a third of the nodes on the network can be actively trying to corrupt the network, but are unable to do that," Middleton said. "This would be our goal for most deployments when you're putting some sort of business value onto the network. You want to know that it will be resilient to attack." Beyond these capabilities, Hyperledger Sawtooth also features on-chain governance, which uses smart contacts to vote on blockchain configuration settings as the allowed participants and smart contracts. Further, it has an "advanced transaction execution engine" that's capable of processing transactions in parallel to help speed up block creation and validation. But, arguably, one of Sawtooth's most intriguing benefits "is its proof of elapsed time, or PoET, consensus mechanism, which is a novel attempt to bring the resiliency of public blockchains to the enterprise realm -- without forgoing the requirements of security and scale," said Jessica Groopman, industry analyst and founding partner of Kaleido Insights.


The Value of Probabilistic Thinking


It is important to remember that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, the weight of that new evidence is called the likelihood ratio or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually, some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know. When making uncertain decisions, it’s nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation? Many of us are familiar with the bell curve, that nice, symmetrical wave that captures the relative frequency of so many things from height to exam scores. The bell curve is great because it’s easy to understand and easy to use. Its technical name is “normal distribution.” If we know we are in a bell curve situation, we can quickly identify our parameters and plan for the most likely outcomes.
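A minimal worked example, with invented numbers: suppose your prior probability that a project will ship on time is 0.7, so the prior odds are 0.7/0.3, about 2.33 to 1. A missed interim milestone is judged twice as likely if the project is running late than if it is on track, so the Bayes factor for the on-time hypothesis is 1/2. The posterior odds become 2.33 x 0.5, about 1.17, which converts back to a probability of 1.17/(1 + 1.17), roughly 0.54. The prior has been weakened by the new evidence but not discarded, which is exactly the updating cycle described above.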


A governance perspective on security audit policy settings

One common mistake that administrators make is failing to define adequate audit trails to enable early detection of security threats and allow for related investigations. The main reason for this oversight is a failure to balance audit trail needs and systems capacity. Some administrators argue that excessive auditing results in production of huge amounts of event logs that are unmanageable. Deciding on what to audit and what not to audit, or what may or may not be omitted, is therefore not just a configuration task, but rather a risk assessment task that should be embedded in the governance structures of the organization’s IT security frameworks. The audit needs of the organization are guided by the regulations, security threat models, information required for investigations and IT security policy to which the organization is subjected. Identification of the possible threats that the organization faces is usually carried out as part of risk assessment. Security events derived from audit policy settings are key risk indicators that the organization should use to measure how vulnerable the system is to the identified threats. 


Because of their single-purpose design, GPU cores are much smaller than cores for CPUs, so GPUs have thousands of cores whereas CPUs max out at 32. With up to 5,000 cores available for a single task, the design lends itself to massive parallel processing. ... GPU use in the data center started with homegrown apps thanks to a language Nvidia developed called CUDA. CUDA uses a C-like syntax to make calls to the GPU instead of the CPU, but instead of doing a call once, it can be done thousands of times in parallel. As GPU performance improved and the processors proved viable for non-gaming tasks, packaged applications began adding support for them. Desktop apps, like Adobe Premier, jumped on board but so did server-side apps, including SQL databases. The GPU is ideally suited to accelerate the processing of SQL queries because SQL performs the same operation – usually a search – on every row in the set. The GPU can parallelize this process by assigning a row of data to a single core. Brytlyt, SQream Technologies, MapD, Kinetica, PG-Strom and Blazegraph all offer GPU-accelerated analytics in their databases. 
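Purely as an illustration of the per-row parallelism described above, the sketch below applies one predicate to every row of a toy table using Java parallel streams. This is CPU-side code, not CUDA, and the data and predicate are invented; it only mirrors the pattern of a GPU assigning each row to its own core.

    import java.util.stream.IntStream;

    public class ParallelScan {
        public static void main(String[] args) {
            // Toy "table": one million rows, each reduced to a single amount column.
            double[] amounts = IntStream.range(0, 1_000_000)
                    .mapToDouble(i -> i % 500)
                    .toArray();

            // Apply the same predicate to every row in parallel -- the same shape
            // of work a GPU-accelerated SQL engine does when it assigns one row
            // per core for a WHERE-clause scan.
            long matches = IntStream.range(0, amounts.length)
                    .parallel()
                    .filter(i -> amounts[i] > 450)
                    .count();

            System.out.println("Matching rows: " + matches);
        }
    }

The hardware difference is the point of the article: the same embarrassingly parallel scan runs across a few dozen CPU cores here, but across thousands of cores on a GPU.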


Sizing Up the Impact of Synthetic Identity Fraud

With recent data breaches and the associated flood of PII onto the dark web, synthetic identity fraud is easier to commit than ever. Credit card losses due to this fraud exceeded $800 million in the U.S. last year, says Julie Conroy, a research director at Aite Group. Perhaps more shocking is just how much of the fraud is going undetected, flying under the radar as credit write-offs. "One of the challenging aspects of this is often it doesn't get recognized as fraud and gets written off as a credit loss; so understanding the scope of the problem has been a challenge," Conroy says in an interview with Information Security Media Group about Aite's latest research. "A number of institutions are starting to see fundamental shifts to things like their credit delinquency curves that are only explainable by synthetic identity fraud." Mitigating the risk of synthetic identity fraud is challenging, given that it's designed to look like a real person establishing a credit history. But Conroy suggests that a layered approach can be valuable.


Introducing The Open Group Open Fair Risk Analysis Tool


The tool is designed for international use, with the user able to select local currency units and the order of magnitude (thousands, millions, billions, etc.) relevant to the analysis. Embedded graphs are controlled through intuitive settings, letting analysts and management inspect the relevant results to a lesser or greater level of granularity as required. The tool further informs management by comparing and presenting statistical results such as the average annual loss exposure and user-defined percentile thresholds of loss and chance of exceedance of annual loss. The tool is genuinely versatile, making it equally suitable for the university professor or corporate trainer teaching quantitative risk analysis, as well as experienced corporate risk analysts who need an easy-to-use yet accurate risk evaluator for individual risk questions. In addition, to further support both the tool and the Open FAIR standards, The Open Group has also recently published a Risk Analysis Process Guide which offers some best practices for performing Open FAIR risk analysis, aiming to help risk analysts understand how to apply the Open FAIR risk analysis methodology.


12 Trends Shaping Identity Management

If the cybersecurity market is a globe, with each market segment taking its piece - one continent for endpoint security, an archipelago for threat intelligence - where would identity and access management fit? "Identity is its own solar system," says Robert Herjavec, CEO of global IT security firm Herjavec Group, and Shark Tank investor. "Its own galaxy." "The problem with users is that they’re interactive," he explains. The reason identity management is such a challenge for enterprises is that users get hired, get fired, get promotions, access sensitive filesystems, share classified data, send emails with potentially classified information, try to access data we don't have access to, try to do things we aren't supposed to try to do. Set-and-forget doesn't work on us. Luckily, great IAM is getting easier to come by. Herjavec points to identity governance tools like SailPoint and Saviynt and privileged access management tools like CyberArk, saying that now "not only are they manageable, they’re fundamentally consumable from a price point."


How IoT And IoE Are Positively Disrupting The Farm-To-Fork Industry


Technologies such as UAVs and orbital satellites are becoming necessary for successfully utilizing fields, analyzing crops and providing proper interventions. Today’s technology allows data about extremely specific field observations to be delivered straight to a tablet or computer. From thousands of miles away, landowners can have satellites monitoring their fields and sending instant information on crop health to anyone anywhere in the world. These innovative technologies give farmers the ability to generate pertinent information about the health of their crops and their yield, identify problems and make important and well-educated decisions. Having all of this sensor technology is just one step in providing food on the table for a constant and ever-growing population. An even bigger step is the implementation of these technologies globally for both developed and underdeveloped countries. Based on the FAO statistics, the nations with the largest population growth rates are also the poorest nations, requiring an even greater need for the technology-based interventions. 


By accounting for disaster scenarios in your IT service management processes, you can integrate disaster recovery thinking into normal IT operations. This will reduce the possibility of extended system outages in a disaster, as well as provide you with a complete action plan for any incident, large or small. What happens if the disaster is less obvious? Do you know when to escalate those seemingly less harmful incidents and begin to initiate recovery procedures? By integrating your Disaster Recovery Plan into your overall IT service management processes, it becomes much clearer when it’s necessary to invoke disaster recovery procedures, rather than continuing to try and troubleshoot your way out of the situation. Knowledge is power, so the more you know about your systems and what to do in case of failures of any size, the less likely you are to experience a long service interruption. One of the best ways you can start the integration between your Disaster Recovery Plan and your IT service management is by performing a Business Impact Analysis on all your IT systems. 



Quote for the day:


"Thinking is the hardest work there is, which is probably the reason so few engage in it." -- Henry Ford


Daily Tech Digest - May 14, 2018

Next-Gen ERP: Finance Leaders Transform into Superheroes


Finance leaders across many small and midsize businesses (SMBs) are truly modern-day business superheroes, flexing their influence across entire companies more than ever. They own responsibilities spanning regulatory compliance; treasury and asset management; investor relationships; and strategic advice to the CEO, president, or owner – all while concurrently managing their traditional finance, budgeting, and accounting functions. Today’s finance leaders are evolving into prominence as they deal with significantly more business risks as markets change at an unrelenting pace and huge chunks of critical data become more readily available. Finance leaders don’t need “super vision” to see they must focus on making fact-driven decisions that potentially impact every area of the company – from recruiting to manufacturing and logistics. According to the SAP-sponsored Oxford Economics report, “How Finance Leadership Pays Off: Small and Midsize Business,” 82% of surveyed finance leaders are accepting this challenge. Yet, many still struggle with outdated technology and manual processes.



Crypto Fight: US Lawmakers Seek Freedom From Backdoors

"It is troubling that law enforcement agencies appear to be more interested in compelling U.S. companies to weaken their product security than using already available technological solutions to gain access to encrypted devices and services," Lofgren says. Other House lawmakers co-sponsoring the bill are Thomas Massie, R-Ky.; Jerrold Nadler, D-N.Y; Ted Poe, R-Texas; Ted Lieu, D-Calif.; and Matt Gaetz, R-Fla. Their effort has earned plaudits from digital rights groups, including the Electronic Frontier Foundation, which on Thursday said that the bill "gets encryption right." The EFF's David Ruiz says in a blog post: "This welcome piece of legislation reflects much of what the community of encryption researchers, scientists, developers and advocates have explained for decade - there is no such thing as a secure backdoor." The move by technology vendors to strengthen data protections in their products has been fueled by ever-increasing cybercrime, hacking efforts sponsored by nation-states, and the scale of the mass surveillance programs being conducted by the U.S. and U.K. governments, as revealed in 2013 by former National Security Agency contractor Edward Snowden.


Google Duplex beat the Turing test: Are we doomed?

Modern AI scientists have called what became known as the Turing test somewhat simplistic, because computer intelligence can be seen in a wide variety of actions beyond the imitation of human conversation. Even so, Turing's test has gone essentially unsolved since 1952. The test is simple. In Volume LIX, Number 236 (October 1950) of Oxford University's MIND, a Quarterly Review of Psychology and Philosophy, Turing published a paper, Computing Machinery and Intelligence. While there were many important concepts in this document, one concept he put forth was what he called an "imitation game." There's a 2014 movie by that name, starring Sherlock's Benedict Cumberbatch. It's about Turing, and it's worth watching. The idea of the imitation game was that both a human and a computer would be communicated with by a second human, the "interrogator." The interrogator would send, essentially, text messages to the human and to the computer and get replies. If the interrogator could not tell which of the two respondents was the human and which was the computer, the computer was said to have passed the Turing test, where a computer could so fully imitate a human that a human couldn't tell the difference.


Data Science for Startups: Introduction


This series is intended for data scientists and analysts that want to move beyond the model training stage, and build data pipelines and data products that can be impactful for an organization. However, it could also be useful for other disciplines that want a better understanding of how to work with data scientists to run experiments and build data products. It is intended for readers with programming experience, and will include code examples primarily in R and Java. One of the first questions to ask when hiring a data scientist for your startup is how will data science improve our product? At Windfall Data, our product is data, and therefore the goal of data science aligns well with the goal of the company, to build the most accurate model for estimating net worth. At other organizations, such as a mobile gaming company, the answer may not be so direct, and data science may be more useful for understanding how to run the business rather than improve products. However, in these early stages it’s usually beneficial to start collecting data about customer behavior, so that you can improve products in the future.


Scaffolding Entity Framework Core with CatFactory

Code generation is a common technique developers use to reduce time spent writing code; most programmers build a code generator at some point in their professional lives. EF 6.x had a wizard for code generation that generated the DbContext and POCOs, but there was no code for the Fluent API, repositories and other things like those. With .NET Core there is a command-line tool for code generation, but we have the same scenario: it generates only the DbContext and entities. With CatFactory we're looking for a simple way to generate code with enterprise patterns. Please don't forget this is an alpha version of CatFactory; don't expect a full version of the code generation engine at this date. Why not use CodeDOM? CodeDOM is a complex code generation engine. I'm not saying CodeDOM sucks or anything like that, but at this moment we're focused on generating code in the simplest way; maybe in upcoming versions we'll add integration with CodeDOM.


This malware is harvesting saved credentials in Chrome, Firefox browsers

The new malware has a subset of the same functionality but has also been upgraded with an arsenal of expanded features, including a new network communication protocol and Firefox-stealing functionality. Vega Stealer is also written in .NET and focuses on the theft of saved credentials and payment information in Google Chrome. These credentials include passwords, saved credit cards, profiles, and cookies. When the Firefox browser is in use, the malware harvests specific files -- "key3.db", "key4.db", "logins.json", and "cookies.sqlite" -- which store various passwords and keys. However, Vega Stealer does not stop there. The malware also takes a screenshot of the infected machine and scans for any files on the system ending in .doc, .docx, .txt, .rtf, .xls, .xlsx, or .pdf for exfiltration. According to the security researchers, the malware is currently being utilized to target businesses in marketing, advertising, public relations, retail, and manufacturing. The phishing campaign designed to propagate the malware, however, is not sophisticated.


Growing CDN services market changes to meet cloud needs

The basic purpose of a CDN is still the same. But cloud use, growing reliance on mobile devices and application developers' needs to optimize their platforms are driving demands for CDN services that boost network performance and scalability, according to Ted Chamberlin, research vice president of cloud service providers at Gartner. Enterprises need their websites to be as dynamic as possible, and now they're looking at other pain points and turning to their CDN providers for help, he said. "They're saying, 'What else?'" That "what else" includes services like web application firewalls, distributed denial-of-service (DDoS) protection, bot mitigation, streaming video and e-commerce optimization.  Most of this happens through cloud platforms. The use of cloud-based CDN services continues to grow because they improve capabilities of web applications and storage, Chamberlin said. "Cloud is spurring everybody to do more than static content."  The general consensus is CDN services are in for a period of big growth. MarketsandMarkets forecasts the CDN services market will grow from $7.5 billion in 2017 to $30 billion in 2022, as CDN providers focus on security, compression, video, web optimization and data duplication features.


ASP.NET Core - The Power of Simplicity


Microsoft decided to go all-in on Open Web Interface for .NET, or OWIN as it’s also known, and abstract away the webserver completely. This allows the framework, as well as its users, to completely ignore which server is responsible for accepting the incoming HTTP requests, and instead focus on building the functionality that is needed. OWIN isn’t a new concept though. The OWIN specification has been around for quite a few years, and Microsoft has allowed developers to use it while running under IIS for almost as long, through an open source project called Project Katana. In reality, Microsoft hasn’t just allowed us developers to use it through Katana; it has been the foundation for all ASP.NET authentication functionality for several years. So, what is OWIN really? To be honest, it’s fairly simple! And the simplicity is actually the thing that makes it so great. It’s an interface that manages to abstract away the webserver using only a predefined delegate and a generic dictionary of string and object. So instead of having an event-driven architecture where the webserver raises events that you can attach to, it defines a pipeline of so-called middlewares.
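OWIN itself is a .NET specification and its application delegate is C#, so the following is only a conceptual sketch, in Java, of the "pipeline of middlewares over a dictionary" idea; the Middleware interface and the environment keys are invented for illustration and do not correspond to the actual OWIN contract.

    import java.util.HashMap;
    import java.util.Map;

    public class MiddlewarePipeline {
        // Each middleware sees the request environment plus a handle to the
        // next step, and decides whether to pass control further down the chain.
        interface Middleware {
            void handle(Map<String, Object> env, Runnable next);
        }

        public static void main(String[] args) {
            Middleware logging = (env, next) -> {
                System.out.println("request path: " + env.get("request.path"));
                next.run(); // hand off to the next middleware
            };
            Middleware app = (env, next) ->
                    env.put("response.body", "handled at the end of the pipeline");

            // Compose the middlewares and run a fake request through them.
            Map<String, Object> env = new HashMap<>();
            env.put("request.path", "/demo");
            logging.handle(env, () -> app.handle(env, () -> { }));

            System.out.println(env.get("response.body"));
        }
    }

The webserver never appears in this picture, which is the simplicity the excerpt is pointing at: whatever accepts the HTTP request only has to populate the environment map and invoke the first middleware.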


The rise of outcome-driven software development

In theory, outcome-driven development is about investigating customer or end-user needs in order to work toward desired outcomes. As a business idea, the outcome-based methodology has been circulating since at least 2002, when a Harvard Business Review contributor outlined a multi-step outcome-based process for business growth, beginning with conducting outcome-focused customer interviews; registering and noting desired outcomes; organizing and rating those outcomes based on degrees of customer satisfaction; and finally harnessing desired outcomes to inform product design. But if the theoretical basis for outcome-driven development was laid nearly 20 years ago, it’s only in recent years that we’ve seen it take hold in industries like software development, where the traditional “Big Bang” software launch is quickly being supplanted by a model of continuous development and delivery. Rather than focus on perfecting a piece of software in time for a perfect launch, innovative development teams view software as a constant work-in-progress.


IoT and personal devices pose huge security risk to enterprises


While 88% of the IT leaders that responded to the survey believe their security policy is either effective or very effective, nearly a quarter of employees from the US and UK did not know if their organisation had a security policy. Of those that reported that their organisation did have a security policy for connected devices, 20% of UK respondents claimed they either rarely, or never, follow it. Only one-fifth of respondents in the US and UK reported that they followed it to the letter. While security policies and security awareness have their place, they also have their limitations, according to RBS CISO Chris Ulliott. Commenting specifically on cyber security awareness training programmes, he told attendees of CrestCon 2018 in London that security professionals need to realise the limitations of such programmes. Ulliott is among those information security professionals who believe that device manufacturers and service providers need to put more effort into making things secure by design so they are safe to use without any fear of security risk.



Quote for the day:


"Grounded leaders are present for others, operate with fortitude, and influence with the full impact of their vision and strength." - Catherine Robinson-Walker


Daily Tech Digest - May 13, 2018

Routing Innovations for the Cloud Era

Modern cloud grade routing architectures improve network economics by increasing network utilization and service availability. They offer end-to-end entropy friendly traffic load balancing - from multi-homed service edges to much simpler ECMP friendly SPRING and IP fabric cores. Traffic load balancing across all available paths improves network utilization and simplifies network capacity planning by easy scale out, without requiring traffic re-engineering. Additionally, multi-pathing architectures improve service availability and reduce failure domains since traffic can reroute to alternate path within milliseconds of a failure. Even better, multi-pathing architectures improve capital efficiency and network economics by allowing operators to run their networks ‘hotter,’ without compromising service SLAs. ... Ultimately, the great advantage of cloud grade networking is architectural simplicity that improves service agility and efficiency. With Juniper, deploying IP fabrics, EVPN, SPRING, RIFT and the Northstar Controller complement current network operations and architectures, and provide a graceful network transformation to modern, cloud era architectures.


Where Bank of America uses AI, and where its worries lie

“There's a chance AI models will be biased,” said Caroline Arnold, BofA's head of enterprise technology (which includes HR tech). “You might say, who's going to be successful at this company? An AI engine could find that people who golf are going to be successful at the company. On the other hand, using those same techniques can remove bias if you have the model ignore some of these things that are indicators of different groups but go on to the meat of the profile of the person and understand it in a deeper way.” Arnold believes an AI engine can never be the final say in who gets hired. Mehul Patel, CEO of Hired, a technology company whose software uses AI to match people to jobs, agreed that AI and humans have biases. “The good news about AI is, you can fix the bias,” he said. “We will boost underrepresented groups. The trouble with humans is they can't unwire their bias easily. Human bias far outweighs algorithmic bias. That's because we humans make quick decisions on people that aren't founded on what you're looking for in the job.”


Can blockchain technology live up to the hype? Barclays analysts say no


“It is high time to end the hype. Bitcoin is a slow, energy-inefficient dinosaur that will never be able to process transactions as quickly or inexpensively as an Excel spreadsheet,” wrote Nouriel Roubini, economist and cryptocurrency skeptic, in a recent Project Syndicate column he co-wrote called “The Blockchain Pipe Dream.” Of course, the advocates of blockchain are as ardently optimistic about what the technology can do, comparing blockchain with the early days of the internet. “It’s easy to compare blockchain with the internet due to the surrounding attention and the amount of money being poured into the respective spaces, but this only gives me more confidence that the technology will prevail long term,” said James Tabor, CEO of Media Protocol in an email to MarketWatch. “In the same vein as the internet made the flow of communication seamless and information readily available, blockchains can dismantle the centralized powers that have caused so much pain across all industries,” Tabor said.


Three elements drive interest in regulatory tech

According to a Juniper Research report, spending on regtech will grow by an average of 48% per annum over the next five years, rising from $10.6 billion in 2017 to $76.3 billion in 2022, as banks and financial services firms seek to avoid costly regulatory fines. Brennan Wright, head of marketing at identity verification and compliance company ThisIsMe, says the current staffing component dedicated to regulatory compliance within financial services organisations will fall to 1% to 2% by 2025, as new regtechs are introduced. "Technologies such as risk data aggregation and reporting tools, fraud detection tools and client onboarding systems will continue to empower compliance teams in the short term and will eventually replace many back-office positions; especially those mundane and admin-intensive roles. "The theme of change will favour legal and compliance teams that are technically savvy, have the necessary creative foresight and an ability to leverage the rapid innovation necessary to keep costs down, systems running smoothly and regulation in check," Wright points out.


The Law of Blockchain: Beyond Government Control?

In the case of blockchain, it’s still early days and Blockchain and the Law reflects that. It contains little in the way of case law (blockchain disputes are only now coming before judges), and the authors, Primavera De Filippi and Aaron Wright, spend considerable time explaining just how blockchains work. Namely, they emphasize how blockchain software creates permanent ledgers that are distributed across multiple computers and are mostly beyond the reach of central authorities. The upshot is what the authors call “lex cryptographica” or a system of rules where autonomous, decentralized code — rather than legislators or judges — determine the outcome of given interactions and disputes. This has the potential to bring dramatic changes in fields like corporate and insurance law. For instance, a blockchain can distribute dividends to shareholders according to pre-coded smart contracts. Or, in the event of an earthquake, an insurer’s blockchain can consult a third-party server (known as an “oracle” in blockchain parlance) to obtain seismic information and arrange payouts.


Connect the Dots: IoT Security Risks in an Increasingly Connected World

For organizations deploying IoT technology, it’s crucial to establish an incident response team to remediate vulnerabilities and disclose data breaches to the public. All devices should be capable of receiving remote updates to minimize the potential for threat actors to exploit outlying weaknesses to steal data. In addition, security leaders must invest in reliable data protection and storage solutions to protect users’ privacy and sensitive enterprise assets. This is especially critical given the increasing need to align with data privacy laws, many of which impose steep fines for noncompliance. Because some regulations afford users the right to demand the erasure of their personal information, this capability must be built into all IoT devices that collect user data. Organizations must also establish policies to define how data is collected, consumed and retained in the IT environment. To ensure the ongoing integrity of IoT deployments, security teams should conduct regular gap analyses to monitor the data generated by connected devices. This analysis should include both flow- and packet-based anomaly detection.


Making The Case For Hybrid Cloud

Enterprises have a complicated relationship with the cloud. Infrastructure-as-a-service (IaaS) offerings from public-cloud providers offer appealing alternatives to acquiring and provisioning on-premises hardware. And line-of-business organizations love being able to subscribe to software-as-a-service (SaaS) offerings that bypass IT altogether. But application development and deployment teams—the people the company charges with leading the digital transformation—have to work harder to gain the benefits cloud computing promises. And clouds add new facets to IT environments already struggling under the weight of too much of a good thing. But now, hybrid clouds—private, on-premises clouds linked to public clouds with data and applications shared among them—promise to take the enterprise’s love affair with cloud computing to a new level. Descriptions of the cloud’s role in enterprise computing vary widely with who’s doing the describing. Public-cloud providers see almost all enterprise workloads moving to, yes, public clouds. To enable that transition, they’ve shored up their offerings with heightened security features. They offer service-level agreements covering availability and performance.


Connecting Enterprise IT Models to Institutional Missions and Goals

There is no doubt that replacing an ERP system requires a significant up-front investment. We needed a way to assess the cost of continuing operations with our existing ERP system against the cost of implementation and support for a replacement. To build these cost and value estimates, we worked closely with many IT teams including application support, infrastructure, data management, and client services to build a return on investment (ROI) model. In addition to licensing and maintenance costs, we looked at ongoing on-premises costs to support infrastructure, backups, and disaster recovery. We included the costs of satellite systems, such as the staff, faculty, and student portal, that we had developed over the years to improve the user experience. Finally, we factored in the cost to rewrite custom-developed modules if we stayed on the existing system. We ended up with a financial model that evaluated the 10-year costs of staying with our current ERP system against costs incurred in the implementation and support of a replacement.


Open Reference Architecture for Security and Privacy Documentation

Privacy is becoming more and more important. New technologies make our lives better, but they also put our freedom and privacy under pressure. Terrorists and (cyber) criminals can be detected more easily by analyzing large amounts of data, and diseases can be treated more effectively when data from more people is available. But these improvements currently come at a steep price: big data analytics systems comb through your data and the traces you leave behind (e.g., mouse movements on web pages, location data) many times a day. Companies know what you will need, think and eat tomorrow better than you do. Your location is continuously tracked through the communication devices you carry, public transport can no longer be used anonymously, and cars are full of track-and-trace technology. If privacy were designed in from the start, just as security should be, there would be less reason to worry about security and privacy breaches. And if more IT designs were open and published under an open license, mistakes in architecture and design would become less likely, partly because of the pressure that openness creates, but also because more experts could contribute to lowering the security and privacy risks of public and private systems.


The Multiplier Effect of Collaboration for Security Operations

Today, state, local and federal agencies are much better equipped to collaborate and coordinate response with real-time situational awareness and actionable intelligence. We’re experiencing a similar evolution in the world of cybersecurity. For years, we’ve relied on a defense-in-depth approach to security where each team uses different point products from different vendors to protect valuable digital assets and systems. The problem is that these disparate technologies don’t interoperate, and each has its own intelligence, making it extremely difficult for tools and teams to share intelligence, collaborate and coordinate response. When security teams are dispersed all over the world, the challenge is even greater. This is where a threat intelligence platform comes into play. It can serve as the glue to integrate these disparate technologies. By automatically exporting and distributing key intelligence across the many layers of your defense-in-depth architecture, it gives your security teams access, as part of their workflow, to the threat intelligence they need to improve security posture and reduce the window of exposure and breach.
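As a rough illustration of that "glue" role, the sketch below normalizes indicators once and fans them out to several defense layers. The connector functions, indicator fields, and confidence threshold are assumptions made for this example, not any specific vendor's API.

```python
# Minimal sketch of a threat intelligence platform acting as "glue":
# normalize indicators once, then distribute them to each defense layer.
# Connectors and the indicator format are illustrative, not a vendor API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Indicator:
    ioc_type: str    # e.g. "ip", "domain", "sha256"
    value: str
    confidence: int  # 0-100

def push_to_firewall(ind: Indicator):
    if ind.ioc_type == "ip":
        print(f"[firewall] block {ind.value}")

def push_to_edr(ind: Indicator):
    if ind.ioc_type == "sha256":
        print(f"[edr] quarantine hash {ind.value}")

def push_to_siem(ind: Indicator):
    print(f"[siem] add watchlist entry {ind.ioc_type}:{ind.value}")

CONNECTORS: List[Callable[[Indicator], None]] = [push_to_firewall, push_to_edr, push_to_siem]

def distribute(indicators: List[Indicator], min_confidence: int = 70):
    """Send sufficiently confident indicators to every integrated layer."""
    for ind in indicators:
        if ind.confidence >= min_confidence:
            for connector in CONNECTORS:
                connector(ind)

distribute([Indicator("ip", "203.0.113.7", 90),
            Indicator("sha256", "ab" * 32, 60)])  # below threshold, skipped
```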



Quote for the day:


"You can't save time. You can only spend it, but you can spend it wisely or foolishly." -- Benjamin Hoff


Daily Tech Digest - May 12, 2018

Boston Dynamics' SpotMini robot dog goes on sale in 2019


Who'll buy it? Probably not you, at least to start.  Raibert didn't reveal price plans, but said the SpotMini robots could be useful for security patrols or for helping construction companies keep tabs on what's happening at building sites. SpotMini can be customized with attachments and extra software for particular jobs, he said. Eventually, though, the company hopes to sell it for use in people's homes. "Most places have something where wheels don't get you everywhere," Raibert said. "We think SpotMini can go to a much larger fraction of places." Boston Dynamics is among the highest-profile robot companies out there. It made a bang with its gas-powered Big Dog quadruped, which could navigate challenging terrain while keeping its balance. Later, the company unveiled Atlas, a humanoid robot that can do flips, pick up boxes and can now run. SpotMini, whose development began while Boston Dynamics was a Google subsidiary, is remarkable for being cute, as well as fascinating to watch. That's pretty valuable given how leery a lot of us are about our future robot overlords.



Back to the Future: Demystifying Hindsight Bias


When using the original dataset, information about the target label crept into the training data. Boat and Body are only known in the future, after the event has already occurred; they are not known in the present when making the prediction. If we train the model with such data, it will perform poorly in the present, because that piece of information would not legitimately be available. This problem is known formally as hindsight bias, and it is prevalent in real-world data, which we’ve witnessed first-hand while building predictive applications at Salesforce Einstein. Here is an actual example in the context of predicting sales lead conversion: the data had a field called deal value, which was populated intermittently when a lead was converted or was close to being converted (similar to the Boat and Body fields in the Titanic story). In layman’s terms, it is like Marty McFly (from Back to the Future) traveling to the future, getting his hands on the Sports Almanac, and using it to bet on the games of the present. Since time travel is still a few years away, hindsight bias is a serious problem today.
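A small synthetic sketch of the effect, assuming scikit-learn is available, is shown below. The data is fabricated, but it mirrors the Boat/Body and deal value situation: a field that is only populated once the outcome is known makes validation scores look spectacular while telling you nothing about real predictive power.

```python
# Synthetic sketch of hindsight bias (target leakage): a feature that is only
# populated after the outcome is known inflates validation scores, but the
# model would be useless at prediction time. All data here is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
legit_feature = rng.normal(size=n)                       # known before the event
y = (legit_feature + rng.normal(scale=2.0, size=n)) > 0  # noisy outcome
# "deal value"/"Boat"-style field: filled in only when the outcome happened.
leaky_feature = np.where(y, rng.uniform(1, 10, size=n), 0.0)

X_clean = legit_feature.reshape(-1, 1)
X_leaky = np.column_stack([legit_feature, leaky_feature])

clean_acc = cross_val_score(LogisticRegression(), X_clean, y, cv=5).mean()
leaky_acc = cross_val_score(LogisticRegression(), X_leaky, y, cv=5).mean()
print(f"without leaky field: {clean_acc:.2f}")   # modest, realistic accuracy
print(f"with leaky field:    {leaky_acc:.2f}")   # near-perfect, but illusory
```

The fix is the same one applied to the Titanic example: drop any field that would not be available at prediction time before training.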


Cloud-Based Product Lifecycle Management Market is touching new levels

HTF MI recently introduced a new title, “Global Cloud-Based Product Lifecycle Management Market Size, Status and Forecast 2025,” from its database. The report provides a study with an in-depth overview, describing the product and industry scope and elaborating on the market outlook and status to 2025. It offers a competitive analysis of top manufacturers by sales volume, price, revenue (million USD) and market share, with the top players including Dassault Systemes, Siemens AG, PTC Inc, Oracle Corporation, SAP SE, Autodesk, Inc, Arena Solutions, Aras, Infor & Accenture PLC. In this report, the global Cloud-Based Product Lifecycle Management market is classified on the basis of product, end user, and geographical region. The report includes in-depth data on region-wise revenue generation and the major market players in the Cloud-Based Product Lifecycle Management market.


The future for service – will you focus on AI, voice or search?


Sadly, service delivery today is anything but routine, predictable or scalable. Take a new application built in the cloud – an issue with the cloud provider could lead to all customers being locked out of their service. With each and every customer suddenly needing assistance, scaling up to cope with the problem is difficult; diagnosing the issue with a supplier is also tricky. Coping with a bigger problem and automating responses where possible is therefore necessary. In the State of the Service Desk Report, 13,000 service desk teams provided their insights into what is working and what is needed to cope in future. Around 69 per cent of front line responders stated that they spent too much time firefighting, rather than being able to plan ahead through better problem management. Similarly, around a quarter pointed to increased automation as essential for their efficiency. Yet each company will have to look at its own approach to automation – there is no one size fits all solution. There are a number of new options that service teams can take to evolve their approach – voice, AI and search. 


How to Achieve Sustainable Employee Engagement in Healthcare

Enabling employees to do meaningful work is critical to employee engagement, and requires a consistent feedback loop and the right systems and processes to support them. Technology can be a powerful accelerant that offloads mundane tasks and allows employees to apply their skills and expertise to the things that technology can’t do—innately human things that require empathy, connectivity, communications, and influence. Unfortunately, many healthcare organizations are still operating on legacy systems and their employees are bogged down by slow technology that prevents them from fully engaging in their jobs. These employees end up spending significant time working on things that they weren’t hired to do such as piecing together and fact-checking spreadsheets and reports—activities that they should be able to do within the technology. The right technology will allow your workforce to do their best work by making what encompasses their role more automated, manageable, and efficient. And as regulations and patient expectations continue to change, the systems you choose should be agile enough to change with your organization’s needs. 


Three Fintechs leading Open banking initiatives in the UK

As the world warms up to the open banking culture, there is always going to be a tug of war between control and agility. As regulators tune their policies around data sharing and open banking, they will have to decide how much control financial services firms have over customer data. At the same time, it is critical to work towards an agile open banking framework within a controlled and secure data-sharing ecosystem that takes care of customers’ interests. The UK, as in most other aspects of Fintech, has been spearheading open banking in both policy and execution, but it would be myopic to assume that open banking starts and ends in the UK. I have touched upon different regulatory approaches to open banking and customer data sharing across the globe in my previous posts. Today, I focus on three of my favourite Fintechs in the UK that are regulated to provide open banking services. ... These players, and a few others, not only add efficiencies to their business through open banking APIs and data analytics, but also create opportunities for the businesses partnering with them.


The ethics lessons will continue until morality improves

So, why didn't Build start with that? For exactly the same reason that reactions to Google Duplex have been so divided: Because technology powered by AI has the potential to make our lives far, far better -- or far, far more unbearable. Microsoft showed a meeting room camera system that recognised people walking into the room, greeted them by name, and transcribed every word they said -- even if their deafness made them a little harder to understand. That deaf team member could join in at an equal level with everyone else, and so could remote colleagues. Everyone got a list of what they had said they were going to do, delivered to their to-do lists. Empowering and convenient -- exactly the kind of system the $25 million AI for Accessibility grant programme Nadella announced is there to create. The same system in a railway station in a country with an authoritarian government, or even left on in an HR meeting room where someone is trying to report an abusive boss, would be deeply worrying. Google showed its Duplex assistant phoning a restaurant and sounding enough like a human to be treated like a real customer.


The hybrid cloud provides a best of both worlds solution

The direction that cloud services and cloud providers are heading in at the moment can be summed up in two major points. Cloud providers are focusing primarily on, first, expanding their infrastructure and making it available in a number of different geographical locations and, second, ensuring that a variety of options and services, including IaaS and PaaS layers, are available so that users are not turned away. One might object that cloud providers are not working as actively on security solutions, but this is offset by the shared responsibility model they have adopted, which treats cloud security as the responsibility of provider and user equally. This is why a hybrid cloud system seems to be the ideal solution: it allows enterprises to stay at the front of the technology race with the cloud while retaining critical workloads on-premises to keep them fully secure. Despite the great number of entrants finding a haven in cloud and data centre technologies, a proper and flexible security solution for hybrid cloud systems still remains to be formulated.


Coaching with Curiosity Using Clean Language and Agile


The Clean for Teams training is all about getting the team to be curious and supportive of each other using Clean Questions. It works wonders as long as people use no more than three questions in a row at a given time, keeping it light and not going as deep as you might in professional performance coaching or therapy.  In a recent workshop I gave, two colleagues were pairing up to practice the questions that they had just learned. They decided to use as a topic a discussion they had had the prior day at work. During the debrief, one commented that the trajectory of the conversation had been richer and more revealing than had been the conversation the day before. They used only a few questions and had had only 15 minutes of exposure to Clean Language. So yes, it is possible with the right guidance to put it to use in your everyday work, whether in a coaching relationship or not. You will experience an improvement in the way people relate to you and you to them, which is one of the outcomes of good coaching. The conditions for peer-to-peer coaching include having a space to listen, and a technique to separate out your own thinking so that you can stay within the mental model of the person you are listening to.


Understand Microservices Monitoring


The ultimate goal, of course, is for processes, errors, and bottlenecks to be managed in ways that are totally transparent to end users, as microservices-based platforms fix themselves with the help of microservices analytics. In the event of a bottleneck, for example, an end customer who tries to buy a widget or service on the Web would ideally never receive an error message prompting them to “try again later.” Developing microservices orchestrations and the associated analytics capabilities in-house is easier said than done, of course. To that end, third parties have emerged with solutions and services for organizations that lack the resources to develop these architectures themselves. “Microservices are moving toward mainstream use today and often show many integration points with existing monolithic enterprise applications,” Torsten Volk, an analyst for Enterprise Management Associates (EMA), said. “Meanwhile, vendors of DevOps-centric application and infrastructure analytics software are stepping up to monitoring this often complex and dynamic world of applications consisting of shared services with often disconnected release schedules.”
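One hedged illustration of the "never show try again later" idea: wrap a downstream call in retries with a fallback, and record counters that a monitoring or analytics layer could aggregate. The function and service names below are invented for the example and are not taken from any vendor mentioned here.

```python
# Minimal sketch of hiding a transient bottleneck from the end user:
# retry a flaky downstream call, fall back instead of surfacing an error,
# and keep simple counters that a monitoring system could scrape.
import time
from collections import Counter

metrics = Counter()

def call_with_resilience(fn, retries=3, backoff=0.2, fallback=None):
    """Retry a flaky downstream call; return a fallback instead of an error."""
    for attempt in range(1, retries + 1):
        try:
            result = fn()
            metrics["success"] += 1
            return result
        except Exception:
            metrics["error"] += 1
            time.sleep(backoff * attempt)   # linear backoff between attempts
    metrics["fallback"] += 1
    return fallback

# Example: a downstream pricing service that fails on its first two calls.
state = {"calls": 0}
def flaky_pricing_service():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("bottleneck upstream")
    return {"widget_price": 9.99}

print(call_with_resilience(flaky_pricing_service, fallback={"widget_price": None}))
print(dict(metrics))   # counters an analytics pipeline could aggregate
```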



Quote for the day:


"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland