Daily Tech Digest - June 18, 2018

The right kind of AI: why intelligence must be augmented, not artificial

Einstein is a layer within the Salesforce platform that helps users make the best use of their data, delivering insight that allows them to truly focus on their customers. It does so by utilising the computing power of AI, a technology at the heart of everything Salesforce is trying to achieve. Like many observers, Salesforce believes that AI is set to be the dominant technology of the next decade and that understanding customers is best achieved by AI. It allows users to address a number of challenges, such as: learning from the data coming into the organisation; improving sales engagement; being proactive in customer service problem solving; and becoming more predictive in addressing issues before they become a real problem. Salesforce Ventures has announced a new $50 million fund to encourage startups to build AI-fuelled applications on top of Salesforce. This overall change of focus is reflected in the apps that are proving most popular within AppExchange. AI’s ability to automate certain tasks, to augment any number of others, and to bring enormous insight based on big data is behind this rise in AI-based apps.


Effective application security takes a team effort

When it comes to application security, the DevOps team has the hardest job of all. Actionable vulnerability data is rarely available during actual development cycles, meaning many security flaws only surface once an application has already gone live. Furthermore, due to time constraints imposed by senior leadership, DevOps teams are often confined to conducting security assessments at the last minute, just prior to release, which is far too late in the day to be effective. DevOps teams need to work closely with security professionals and senior leadership to build security into the entire development lifecycle. Moving to a continuous integration process can help with this, as can the use of both dynamic scanning and source scanning throughout the development and implementation phases. It’s also the role of DevOps to demonstrate to senior leadership that a slightly longer development phase is far preferable to repeating the entire process multiple times due to vulnerabilities only being discovered after release. However, this is only possible if both DevOps and security professionals can communicate effectively up the chain of command, without fear.


Myth-buster: the real costs of Public and Private Cloud

Private cloud infrastructure is generally perceived as costly due to the consultative element and ongoing management costs. Despite public cloud seeming the far more cost-effective option on the surface, there are some hidden costs attached. For example, there is a hidden charge attached to the cost of moving data traffic between the various physical and virtual machines used by the public cloud. Public cloud providers generally charge an additional 20% on top of the fees charged by the platform providers themselves. Another hidden cost of public cloud is the background management and maintenance services, which are of course necessary for any type of cloud infrastructure. Finally, the question of reversibility is key. When a firm migrates its infrastructure from one cloud to another or to an internal architecture, the costs involved are often underestimated. Once all operational developments have been tailored specifically for a public cloud, the cost of migrating away from that can quickly become expensive! And that’s not even including the migration costs involved when transferring data to an external platform, which can also be high.


RegTech: The future of market abuse surveillance

Despite the risks, using independent solutions for different forms of data is currently the norm. In fact, 70% of respondents to a PwC market abuse surveillance survey are using three or more software vendors to execute their surveillance requirements and 75% are unable to review trade alerts alongside contemporaneous electronic communications or voice alerts. Further, alerts generated by multiple systems are typically reviewed manually by separate compliance teams that do not have easy access to each other’s information. Such dispersion impedes firms from having a true 360° view of employee behavior and limits the ability to stay ahead of emerging risks. Adding to the problem, data volumes and sources have also been increasing as the methods that traders use to communicate on a daily basis – from cell phones to chat apps to social media – continue to diversify. Communications surveillance also typically uses lexicon-based search techniques, which tend to produce high volumes of false positives and potentially miss true suspicious behavior. Finally, there are challenges associated with high volumes of false positives, some stemming from legacy systems and scenarios, which may not be calibrated with the current business landscape and risks.


Turn on the lights! The key challenges for business transformation

Waste can simply be defined in terms of its opposite – value. While costs are inevitable for all businesses, waste is optional. When a company is creating value and nothing but value, there is no waste. However, no system is that efficient and there will always be some waste which is uneconomical to remove. But a large percentage of organisational waste, anything from 50% to 70% based on most studies, provides a healthy return on investment when removed, while contributing to competitiveness. And this is why we turn the lights on. And while those lights are on, it is important that all can see the mess for what it is. It will almost certainly be a lot bigger than anyone had imagined, and leaders need to be prepared for that. They also need to be prepared to forgive, because if they don’t, the waste will simply go back underground. Keeping the lights on means focusing not on the people, and not on the waste, but rather on the causes of the waste. If time and resources are spent only on cleaning up the mess, things will get dirty again very soon. The endgame here is to understand and deal with the institutional practices and structures that are endemic to the creation of institutionalised waste.


The digital transformation and the importance of humans

Change always creates stress and uncertainty for us as human beings. In my day-to-day work at Siemens, I often notice, however, that many people are generally open to change as such. In fact, employees often want things to change. As a rule, the difficulties arise as soon as they have to try out new things and implement concrete changes themselves. Then I often hear statements like: “I don’t even know where to begin.” Or: “I have so much to do and no time for anything else.” And that’s exactly where the problem is: we have to understand that change isn’t “deferrable,” let alone a phase that ends at some point. We can’t cut ourselves off from new developments, nor can we reduce the speed at which changes occur. To keep pace, we’ve got to adapt and move faster – as people, as a company and as a society. We’ve got to be open for new things and leverage digitalization and its opportunities in such a way that they help us increase the quality of life and benefit companies as a whole. To accomplish this goal, we have to do some things differently than we have in the past. And this shift can’t happen without a culture change.


Cisco makes SD-WAN integration a top priority

“The branch is a very complicated part of the network when you think about it because when you think about a global business where you’ve got all the different types of interconnect you have around the world and you’re trying to manage all that. That part of the network is going to a software-defined WAN, and it’s an area we’ve been investing in heavily,” said David Goeckeler, executive vice president and general manager of networking and security at Cisco, in an interview with Network World. “We had an iWAN solution. We have an SD-WAN solution from Meraki, and then we purchased Viptela because they had innovated on the cloud side of it, and we wanted to marry that up with the [Integrated Services Router] franchise that we had in iWAN, and we are well down the path of that integration. And I think we’re seeing big projects move forward now in the SD-WAN space. It’s a market that had been kind of stalled because I think customers were trying to figure out what to do,” he said. Other Cisco executives reiterated the importance of getting Viptela further integrated into the company’s networking portfolio. “One of the important parts of what Viptela brings is an easy way to implement really strong end-to-end segmentation that lets users build and secure different segments of their networks,” said Scott Harrell.


Does Cyber Insurance Make Us More (Or Less) Secure?

Cyber insurance policies can be complex and tricky to understand, and anxious C-suite executives often buy cyber insurance without understanding the full extent of what policies cover and what they don't. To grow the market and diversify the risk, insurance companies are taking on all comers, often with no adequate measure of the true risk any given insured enterprise faces. Both insurance carriers and enterprise buyers of cyber insurance are groping their way forward in the dark, a potentially dangerous scenario. Most insurance carriers, however, are aware of this blind spot and are researching how to better measure and quantify cyber risk. Measuring cyber risk is very different from measuring risk in other domains. If you want to rate the risk of an earthquake or a hurricane, the actuarial science is sound. A data center in a hundred-year flood plain can expect a catastrophic flood once in a hundred years. Cyber risk, on the other hand, remains far harder to quantify — a problem, it must be noted, the insurance business is working hard to solve.


How to know when data is 'right' for its purpose

There are certainly scenarios where IT can answer the “right data” question with a confident yes or no and with only the most minor qualification. That is the case with metrics and calculations, because there is always a right answer when math is involved. The qualification would be that IT has the correct definition and, of course, that the underlying data has been populated consistently. Another way of working through this challenge is to clarify the expectation of the business user. Asking a few more questions to ascertain the true need and the reason behind the question can help frame the answer tremendously. Is the question based on previous instances of “bad” data? Again, “bad” data is relative and is always judged from the perspective of the business user. If so, then framing the response to highlight improvements in the consistency and validation of the source data may reassure and meet the users’ needs. Maybe the question is related to reference data that had not previously been governed or monitored. If so, walking through the steps taken to evaluate validity against a set of expected results will start building confidence in the final product.


Default Interface Methods in C# 8


The main benefit that default methods bring is that it is now possible to add a new default method to an existing interface without breaking the classes that implement that interface. In other words, this feature makes it optional for implementers to override the method or not. An excellent scenario for this feature is the logging example described below; the ILogger interface has one abstract WriteLogCore method. All of the other methods, like WriteError and WriteInformation, are default methods that call the WriteLogCore method with a different configuration. The ILogger implementer has to implement only the WriteLogCore method. Think of how many lines of code you have saved in each inherited class of the logger type. While this feature can be a great thing, there are some dangers, as it is a form of multiple inheritance. Hence it suffers from the Diamond Problem, which is described below. Also, the interface methods must be "pure behavior" without state; that means that interfaces, as in the past, still cannot directly reference a field.
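
A minimal sketch of the ILogger pattern described above (the names follow the description; the article's actual code is not reproduced here). It assumes C# 8 and a runtime that supports default interface members, such as .NET Core 3.0 or later. Note that the default members are reached through the interface type, not through the class.

```csharp
using System;

public enum LogLevel { Information, Error }

public interface ILogger
{
    // The single abstract member that every implementer must provide.
    void WriteLogCore(LogLevel level, string message);

    // Default implementations: implementers get these for free and may override them.
    void WriteInformation(string message) => WriteLogCore(LogLevel.Information, message);
    void WriteError(string message) => WriteLogCore(LogLevel.Error, message);
}

public class ConsoleLogger : ILogger
{
    // Only the core method has to be implemented.
    public void WriteLogCore(LogLevel level, string message) =>
        Console.WriteLine($"[{level}] {message}");
}

public static class Program
{
    public static void Main()
    {
        // Default members are visible only through the interface type.
        ILogger logger = new ConsoleLogger();
        logger.WriteError("Something went wrong");
        logger.WriteInformation("All good again");
    }
}
```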



Quote for the day:


"If you're not failing once in a while, it probably means you're not stretching yourself." -- Lewis Pugh


Daily Tech Digest - June 17, 2018

Making Blockchain Enterprise Ready

Today, however, many enterprise blockchains are running in the cloud and often in a single datacenter. This goes against the philosophy that blockchains should be fully decentralized and opens up security risks associated with managing data in a single location. Cisco believes the next generation of enterprise blockchain systems should be fully decentralized and offer hybrid deployment models, where blockchain nodes can be on-premise behind firewall systems or use public cloud nodes or some combination of the two. In addition, business leaders need to think holistically about end-to-end security. A common misconception is that the distributed nature of blockchains makes them inherently secure, and in many ways, they are. However, without the correct design measures, they can be prone to multiple threats, for example through the exploitation of infrastructure-level vulnerabilities. Our approach involves end-to-end security architecture, leveraging analytics that spans the infrastructure layer through to the application layer.


What has changed in recent years in the promise of AI is the coming together of very advanced computing power with highly sophisticated algorithms and networks that collaborate to recognize patterns, discern meaning from vast sets of data and train themselves to get better at these tasks.  For the enthusiasts, the big question in AI might be when we will achieve machine super-intelligence, or Singularity, the point at which machine intelligence explodes to vastly surpass human intelligence. It is chillingly described as the last invention humans will make. But for a country at India’s level of socio-economic development, the suite of AI technologies can be applied effectively ... The NITI Aayog report proposes an umbrella organisation to shape and implement India’s AI policies—from stitching global partnerships to picking specialised teams to pursue audacious moonshot projects. It identifies five focus areas and a two-tiered institutional structure - government-led CORE or Centre of Research Excellence to focus on core research and private sector-led ICTAI or International Centers of Transformational AI to focus on application-based AI research.


Communicating Architecture with Stakeholders

Because architects are often visually oriented, we tend to show the fruit of our efforts by means of diagrams. But this is not always the best option; it mainly works for 1) designing and 2) visually oriented people. In management, we often find people with a legal background. They are ill-served by diagrams and other pictures, but instead require text, while those with a background in finance need numbers, tables, and charts. Speaking their “language” is an important part of effecting the change you want to see.  The information contained in architecture models can be displayed in many forms – not just diagrams. In particular, the various dependencies between the elements of your architecture can be exploited to create cross-cutting insights that serve the needs of various stakeholder groups. Displaying this information in multiple ways may help different types of stakeholders gain the understanding they need to move forward.  Below, you see the technology obsolescence risk of (part of) the business capabilities of an organization. The end-of-life dates of various technologies, as provided by Technopedia, are aggregated across the entire architecture via, in this instance, applications and business processes supporting each capability.


Setting Up Ethereum Blockchain On Azure

This article will focus on working with Ethereum on the Microsoft Azure platform. Azure is a cloud platform from Microsoft and can be used for other beneficial purposes as well. We’ll use a consortium from Azure. A consortium in Ethereum’s context means a collaboration between supporting members of a blockchain with underlying infrastructure. A consortium helps us configure the blockchain and its controlling parts on Azure. We create a Genesis block and a default account with initial Ether allocations. Setting up a consortium here creates a private blockchain that is not part of the public Ethereum blockchain, which means Ether from the private blockchain cannot be transferred or exchanged directly with Ether from public blockchains. Azure helps us set up the baseline blockchain infrastructure by providing templates. We’ll first set up a blockchain consortium on Azure, and Azure will create a genesis block for us. If you do not have a paid Azure account, you can take advantage of the new-account benefit of $200 in credits. That means if you are new to Azure and want to play around with its free trial, you’ll get $200 in credits that you can use to explore Azure. If you are new to Azure and do not have an account, follow the process below; otherwise, log in directly to your portal.


Drivers Of Crypto Disruption: Will Public Or Private Blockchains Lead The Way Forward?

Companies may have good reasons for wanting to avoid implementing public blockchain infrastructure. Chief among their concerns would be that they want to keep their information and transactions private. Firms need to protect their intellectual property and to safeguard critical pieces of information. A private blockchain can function like its public counterparts with the exception of a permissioning system where only authorized users could interact with the blockchain. External parties such as accountants, auditors, and regulators could be brought into the structure in order to monitor business activities in real time. As such, blockchain creates the possibility for what many are calling a triple entry accounting system. Transactions are not just recorded as debits and credits on separate books when companies do business with one another, but the transactions can be easily verified on a blockchain as well. This has the potential to eliminate some of the chief mechanisms for fraud even if it will prove challenging to eliminate fraud completely.


2018 Banking Industry Outlook: Six themes driving banking industry trends

Figure 1: Six macro themes and five banking businesses
For banks globally, 2018 could be a pivotal year in accelerating the transformation into more strategically focused, technologically modern, and operationally agile institutions, so that they may remain dominant in a rapidly evolving ecosystem. This metamorphosis is far from easy as most banks grapple with multiple challenges: complex and diverging regulations, legacy systems, disruptive models and technologies, new competitors, and, last but not least, an often restive customer base with ever-higher expectations. ... Although banking has undoubtedly improved in many ways in the last couple of decades, most organizations have not gone through the customer-centric transformation that other industries have undergone. With widespread digital disruption, banks may even risk losing control over customer experience. Of course many banks, global and local, large and small, have changed their market and customer strategies since the financial crisis. Many of these decisions may have been forced upon them by regulatory expectations, and perhaps are not necessarily grounded in a refined understanding of markets and customers.


Image Based Risk Assessment In Cancer

Even before machine learning and deep learning, image analysis software was available that could identify edges, boundaries, geometric shapes, and density distributions, as well as perform texture analysis, as aids for the pathologist. Subsequently, machine learning methods of various types, such as random forests and support vector machines, were able to learn from these features and identify lesions. With the advent of deep learning with multiple connected processing layers, it was no longer necessary to extract such features from an image before using them as representations of the image to input into the algorithm. The most widely used deep learning model is the convolutional neural network (CNN). A CNN can do feature extraction automatically and analyse that information directly. All that a CNN needs is a (suitably labelled) training set of the images themselves. Rather than working with the large number of pixels in the image directly, the initial layers of the CNN generate increasingly complex representations of those images which, though much smaller than the original, capture important features that the rest of the network uses for analysis and classification.
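
As a rough illustration of the low-level feature extraction a CNN's early layers perform, here is a hand-rolled sketch, not a real deep learning framework: a single 3x3 vertical-edge filter applied to a toy grayscale image. Real CNN layers apply many learned filters, add nonlinearities and pooling, and stack dozens of such layers; the image and kernel values below are purely illustrative.

```csharp
using System;

public static class ConvolutionSketch
{
    // Apply one 3x3 filter to a grayscale image (valid padding, stride 1).
    static double[,] Convolve(double[,] image, double[,] kernel)
    {
        int h = image.GetLength(0) - 2, w = image.GetLength(1) - 2;
        var output = new double[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int ky = 0; ky < 3; ky++)
                    for (int kx = 0; kx < 3; kx++)
                        output[y, x] += image[y + ky, x + kx] * kernel[ky, kx];
        return output;
    }

    public static void Main()
    {
        // Toy 4x4 "image" whose right half is bright: the vertical-edge filter
        // responds strongly where intensity changes from left to right.
        double[,] image =
        {
            { 0, 0, 1, 1 },
            { 0, 0, 1, 1 },
            { 0, 0, 1, 1 },
            { 0, 0, 1, 1 }
        };
        double[,] verticalEdge =
        {
            { -1, 0, 1 },
            { -1, 0, 1 },
            { -1, 0, 1 }
        };

        double[,] features = Convolve(image, verticalEdge);
        for (int y = 0; y < features.GetLength(0); y++)
            Console.WriteLine($"{features[y, 0]} {features[y, 1]}");
    }
}
```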


Risk Management and Business Models on NEO Blockchain


The original blockchain architecture for Bitcoin is not built for Turing-complete Smart Contracts; we need another architecture for large-scale decentralized applications that fulfills the requirements of business and economic use cases. As such, PoW and PoS are both unsuitable. To meet the requirements, NEO Blockchain is built on delegated Byzantine Fault Tolerance (dBFT), an adapted BFT algorithm by Erik Zhang. The advantage of dBFT is that it cannot branch. There is always one and only one block proposed at any single time. ... The transaction is completed directly after it is included in a block, in other words in one single confirmation. It also means that the transaction time is effectively equal to the block time; if block time is 5 seconds, then the transaction is received in 5 seconds. Moreover, there is no risk management needed when sending and receiving large-volume and high-value transactions. With no branching during operation, all transactions are 100% complete after they are included in a block, so no risk analysis on finality has to be made.


Democratizing Stream Processing with Apache Kafka & KSQL


To show the value of stream processing, let’s pick a motivating example that is applicable across many different industries. Imagine that you need to create and maintain, in real-time, 360-degree profiles for your customers. This would be useful for many reasons such as: To create a better customer experience. For instance, "This premium customer tried several times in the past five minutes to checkout his shopping cart but failed because our latest website update was buggy, let’s offer him a discount immediately and apologize for the bad user experience."; and To minimize risk. For instance, "This new payment seems fraudulent because the payment originated outside the US, yet the customer’s mobile phone app reports she is in New York City. Let’s block the payment and contact the customer immediately." What is needed for this use case is to bring together, in real-time, data from a variety of internal and possibly also external channels, and then consolidate this information in comprehensive customer profiles (sometimes called customer 360-degree profiles) that are continuously updated as soon as new information is available in any of the channels.
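
The article's examples are built on Kafka and KSQL, but the core idea, continuously folding events from several channels into one per-customer profile, can be sketched in plain code. In a real deployment the events would arrive on Kafka topics and the aggregation would live in a KSQL table or a Kafka Streams state store; the channel names, event types, and fields below are illustrative assumptions, not part of Kafka's or KSQL's API.

```csharp
using System;
using System.Collections.Generic;

public class CustomerProfile
{
    public string CustomerId;
    public string LastKnownLocation;
    public int FailedCheckouts;
    public DateTime LastSeen;
}

public static class Profiles
{
    static readonly Dictionary<string, CustomerProfile> Store =
        new Dictionary<string, CustomerProfile>();

    // Called once per incoming event, in arrival order, so the profile is
    // always as fresh as the latest message from any channel.
    public static void Apply(string customerId, string channel, string type, string detail)
    {
        CustomerProfile profile;
        if (!Store.TryGetValue(customerId, out profile))
        {
            profile = new CustomerProfile { CustomerId = customerId };
            Store[customerId] = profile;
        }

        profile.LastSeen = DateTime.UtcNow;
        if (channel == "mobile-app" && type == "location")
            profile.LastKnownLocation = detail;
        if (channel == "web" && type == "checkout-failed")
            profile.FailedCheckouts++;

        // Acting on the consolidated view in real time, as in the article's example.
        if (profile.FailedCheckouts >= 3)
            Console.WriteLine($"Offer a discount and apology to customer {customerId}");
    }

    public static void Main()
    {
        Apply("c42", "mobile-app", "location", "New York City");
        Apply("c42", "web", "checkout-failed", "cart-123");
        Apply("c42", "web", "checkout-failed", "cart-123");
        Apply("c42", "web", "checkout-failed", "cart-123");
    }
}
```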


DARL and Whitebox machine learning

The most common is supervised learning. This is where you have collected multiple examples of the inputs and outputs of some process you want to learn, recorded in a database or some code representation like XML or JSON, and the machine learning algorithm tries to create a model that reproduces the outputs when presented with the inputs. If you remember from the previous article, DARL inputs and outputs can be Textual, Categorical, Numeric or Temporal. Machine learning here is limited to categorical and numeric inputs and outputs. Learning is limited to a single output at a time. If that output is categorical then classification has been performed; if numeric, then prediction. The data used to train the model is the training set, and some of the data may be put aside to form a test set. With this machine learning algorithm you specify the percentage of the data to train on, and the system will randomly split the data into two groups. Although problems that have an existing analytic solution are sometimes used to test ML algorithms, for instance getting a model to copy some logical relationship, in the real world no one in their right mind would use a machine learning algorithm to learn something for which an analytic model already exists.
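
A minimal sketch of the random percentage split described above, written as generic code rather than DARL's own API: shuffle the labelled examples, carve off the requested share for training, and keep the remainder as the held-out test set. The toy data set and the 80/20 split are illustrative choices.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TrainTestSplit
{
    // Randomly split labelled examples into a training set and a test set.
    // trainFraction is the share used for training, e.g. 0.8 for an 80/20 split.
    public static (List<T> Train, List<T> Test) Split<T>(
        IList<T> examples, double trainFraction, int seed = 42)
    {
        var rng = new Random(seed);
        var shuffled = examples.OrderBy(_ => rng.Next()).ToList();
        int trainCount = (int)(shuffled.Count * trainFraction);
        return (shuffled.Take(trainCount).ToList(), shuffled.Skip(trainCount).ToList());
    }

    public static void Main()
    {
        // Toy data set: (numeric input, categorical output) pairs.
        var data = Enumerable.Range(0, 10)
            .Select(i => (Input: i, Label: i % 2 == 0 ? "even" : "odd"))
            .ToList();

        var (train, test) = Split(data, 0.8);
        Console.WriteLine($"train={train.Count} test={test.Count}");  // train=8 test=2
    }
}
```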



Quote for the day:


"Obstacles can't stop you. Problems can't stop you. Most of all, other people can't stop you. Only you can stop you." -- Author, Jeffrey Gitomer


Daily Tech Digest - June 16, 2018

Data Lake – the evolution of data processing


Two things emerge from this - structuring and transforming data on ingestion incurs a performance hit and risks data loss. If we try to do complex computations on a large amount of incoming data, we will most likely have serious performance issues. If we try to structure data on ingestion, we might realize later on that we need pieces of data discarded during structuring. The thing is, with vast and complex data, it is most likely that we won’t know what insights we can extract from it. We don’t know what value, if any, collected data will bring to our business. If we try to guess, there is a fair chance of guessing wrong. What do we do then? We store raw data. Now, we don’t want to just throw it in there, as that will lead to a data swamp - a pool of stale data, without any information on what it represents. Data should be enriched with metadata, describing its origin, ingestion time, etc. We can also partition data on ingestion, which makes processing more efficient later on. If we don’t get the right partitioning on the first try, we’ll still have all of the data, and can re-partition it without any loss.
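
A small sketch of the "store raw, enrich with metadata, partition on ingestion" idea. A local folder stands in for the data lake, and the partition scheme (source plus ingestion date) and field names are illustrative assumptions rather than any particular product's layout.

```csharp
using System;
using System.IO;

public static class RawIngestSketch
{
    // Keep the raw payload untouched, wrap it with ingestion metadata, and write
    // it under a partitioned path so later processing can prune partitions
    // instead of scanning the whole lake.
    public static string Ingest(string lakeRoot, string source, string rawPayload)
    {
        DateTime ingestedAt = DateTime.UtcNow;
        string partition = Path.Combine(lakeRoot, "raw",
            $"source={source}", $"date={ingestedAt:yyyy-MM-dd}");
        Directory.CreateDirectory(partition);

        string file = Path.Combine(partition, $"{Guid.NewGuid():N}.json");
        // Metadata sits next to the untouched payload; nothing is discarded.
        string record = "{\"source\":\"" + source + "\"," +
                        "\"ingested_at\":\"" + ingestedAt.ToString("o") + "\"," +
                        "\"payload\":" + rawPayload + "}";
        File.WriteAllText(file, record);
        return file;
    }

    public static void Main()
    {
        string path = Ingest("./lake", "clickstream", "{\"user\":42,\"event\":\"page_view\"}");
        Console.WriteLine($"wrote {path}");
    }
}
```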



The Menace and the Promise of Autonomous Vehicles


The Drive.ai program lacks the ambition of, say, Waymo’s Phoenix-area AV service, which ferries passengers around without a human driver ready to take the wheel, but the project—publicly announced, small in scope, conducted in partnership with city officials—seems to take a more measured approach to AV testing than exists elsewhere. Bamonte described Frisco’s AV program as “kind of crawl, walk, run.” “We don’t want developers to just plop down unannounced and start doing a service,” Bamonte told me. He compared the Drive.ai testing favorably to Tesla’s, whose roadster has ambitious autopilot features that have already been deployed in thousands of commercial vehicles, wherever drivers take them. So far, Tesla’s autopilot mode has caused several high-profile crashes on public roads, including fatal accidents in Florida and California. The Uber crash has added a “note of caution,” Bamonte said, but “it’s our responsibility to continue to explore and test this technology in a responsible way.” For him, that means closed tracks and computer simulations; after conducting a public education campaign and soliciting feedback, deployment on public streets will inevitably follow.


Cyber threat intelligence versus business risk intelligence

"Alongside the convergence of activities and systems, with IoT there's all sorts of expansion, the perimeter also disappears," says Gartner's Contu. With this, business risk is fast becoming the responsibility of the whole organisation, not just a small dedicated section of the organisation. "Organisations need to take a business-driven security approach, which encourages all stakeholders to be engaged in the risk conversation, identifying what matters most to them, so threats can be tackled in a way that safeguards what's most important -- whether that's customer data, intellectual property or another business-critical asset," said Knowles. IT, security, application builders, developers, DevOps operations and more: all of these parts of the organisation need to be thinking about business risk on a day-to-day basis -- and what they need to think about is constantly changing. "That's a critical part of thinking about a risk-based model: it's not static, it's not something you have consultants looking at; it should be instrumented and refined over time and changing depending on what you see," said Toubba, who adds how information on cyber threats should also be continually updated in this way.


What to expect from tomorrow’s smart spaces


Privacy concerns remain; employees will want to avoid mingling their personal lives with their work lives, but this concern isn’t new. Still, it will be up to employees and perhaps unions to ensure an individual’s right to privacy remains intact, a task that can become more challenging as more employees find themselves constantly connected to the office. Airports can rely on facial recognition technology tied with other smart technologies to detect potential security risks. Smart features can lead to better convenience as well; smart technology combined with artificial intelligence can provide useful metrics on airport crowds, which can be used to ease the process of traveling through airports. Airports and other high-risk security areas aren’t the only spaces that can benefit from this enhanced security; sports stadiums, for example, might be able to provide a safer experience with smart technology. These technologies, however, can easily encroach on an individual’s privacy by attaching activities in these spaces to a name and other identifying information. People might be willing to trade some privacy for easier check-ins, but having information stored indefinitely might cause concerns.


Insurance on the Blockchain

Blockchain technology should be leveraged in the reinsurance process to increase interoperability. With a shared digital ledger, there need no longer be the discrepancies in data format, process, and standards that currently plague the industry. A permissioned blockchain ledger can be used to streamline communication, the flow of information, and data sharing between insurers and reinsurers as an available and trusted repository of contract information. This becomes a faster, more efficient, and less risky process as data related to loss records, asset ownership, or transaction histories is recorded on a blockchain that is trusted to be authentic and up-to-date. Access to this information can be heavily permissioned with granular access controls, with exhaustive rules governing read and write capabilities per user. Reinsurers can query a blockchain to retrieve updated, real-time, and trusted information rather than rely on a centralized insurance institution to report on relevant data (e.g. losses or transfers of ownership). This can massively expedite underwriting times.


Unbreakable smart lock devastated to discover screwdrivers exist

Well, it turns out the lock broadcasts its own Bluetooth MAC address over the airwaves, and uses that MAC address to calculate a key used to lock and unlock the device. Tierney cracked the system disturbingly quickly: "It upper cases the BLE MAC address and takes an MD5 hash. The 0-7 characters are key1, and the 16-23 are the serial number." The upshot? He was able to write a script, port it to an Android app, and open any nearby Tapplock wirelessly using his phone and Bluetooth, taking less than two seconds each time. "This level of security is completely unacceptable," he complained. "Consumers deserve better, and treating your customers like this is hugely disrespectful. To be honest, I am lost for words." The problem was so bad that Tierney informed the manufacturer, and gave it seven days before he went public with the fundamental flaw. Just hours before the deadline was up, Tapplock put out a security advisory warning that everyone needed to upgrade their lock's firmware "to get the latest protection." This patch, the company noted, "addresses several Bluetooth/communication vulnerabilities that may allow unauthorised users to illegal gain access."
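
To make the flaw concrete, here is a sketch of the key derivation Tierney describes: everything is computed from the broadcast BLE MAC address, so anyone in radio range has all the inputs. The example MAC and the exact formatting of the hash input (for instance whether the colon separators are kept and whether the digest is rendered in lower case) are assumptions; the character offsets follow the quoted description.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class TapplockKeySketch
{
    public static void Main()
    {
        // The MAC address is broadcast over the air, so an attacker already has it.
        string bleMac = "AA:BB:CC:DD:EE:FF";   // illustrative value

        // "It upper cases the BLE MAC address and takes an MD5 hash."
        using var md5 = MD5.Create();
        byte[] digest = md5.ComputeHash(Encoding.ASCII.GetBytes(bleMac.ToUpperInvariant()));
        string hex = BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();

        // "The 0-7 characters are key1, and the 16-23 are the serial number."
        string key1 = hex.Substring(0, 8);
        string serial = hex.Substring(16, 8);
        Console.WriteLine($"key1={key1}  serial={serial}");
    }
}
```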


We’re Worrying About the Wrong Kind of AI


Technology that unthinkingly tramples over moral boundaries risks public rejection. Hence, researchers are openly discussing the ethical challenges likely to arise. Almost no one thinks a single cell is conscious, and today's organoids aren't either, but there's a continuous arc of increasing complexity that technology looks certain to traverse on the way to fully realistic human brains. What if a cherry-sized organoid of 10 million neural cells gains awareness of itself, or shows signs of distress? At what point does it become clear that organoids have crossed the boundary into beings deserving of rights, or warranting the appointment of a legal guardian? Right now, no one even knows how to reliably measure attributes of consciousness or thought in a piece of neural matter. We can do so in real brains, but what about things that are only partially like brains? Things may get weirder still with bits of artificial brain tissue implanted into the brains of other organisms, resulting in chimeras – organisms not fully of any one species, but part human and part mouse, pig or dog. Like AI based on computing, this research is racing ahead at alarming speed.


The fundamental elements of security in microservices


In a monolithic architecture, there can often be a single point of failure that could bring down an entire operation. In a microservices architecture, application components operate in isolation from one another, which means a security breach will not immediately affect the entire stack. Despite this architectural trait, you can still expect to face several complex security challenges. One challenge is that there is simply more attack surface to target. It's hard to keep an eye on everything within your stack when your application is made up of dozens of different microservices. A microservices-based app could use 10 containers, which translates to 10 times the number of instances to monitor. This challenge multiplies if those containers are regularly shut down and resurrected. The second issue involves the blurred perimeter of a microservices architecture. Unlike the clear-cut security perimeter that a firewall provides a monolithic app, there is no such definitive boundary with cloud-based microservices apps.


Modern Cybersecurity Demands a Different Corporate Mindset

Organizations may feel more confident about confronting the types of attacks that have become familiar in recent years, but they still lack the capability to deal with more-advanced, targeted assaults. Overall, 68% of respondents have some form of formal incident response capability, but only 8% describe their plan as robust and spanning third parties and law enforcement. To improve their chances of fighting back against cyberattackers, organizations will have to overcome the barriers that currently make it more difficult for cybersecurity operations to add value. For example, 59% of GISS respondents cite budget constraints, while a similar number lament a lack of skilled resources; 29% complain about a lack of executive awareness or support. The so-called disconnect between cybersecurity and the C-suite still persists, with a mere 36% of corporate boards having sufficient cybersecurity knowledge for effective oversight of risks, as highlighted in the EY report. Ultimately, organizations that fail to obtain executive support and devote the resources necessary for adequate cybersecurity will find it very difficult to manage the risks they face.


Don’t start the blockchain revolution without making security a top priority

The McAfee report details a 2017 cryptocurrency phishing scam in which a cybercriminal set up a fraudulent cryptocurrency “wallet” service. After collecting authentication information from the service’s users over the course of six months, the thief drained $4 million from unsuspecting customers’ accounts. Researchers provide examples of how cybercriminals using malware have been empowered by the proliferation of cryptocurrencies. The explosion of ransomware over the last few years has become operationally possible in large part due to the use of cryptocurrencies, which cloak cybercriminals’ identities associated with ransom payment transfers. The research illustrates the growing trends of malicious miners and cryptojacking, which create a vector for infection (via malware) and monetization (via mining). Recent McAfee Labs research in this cybercrime category found that total coin miner malware grew a stunning 629% in Q1 2018—from around 400,000 samples in Q4 2017 to more than 2.9 million samples in the first quarter of this year.



Quote for the day:


"Leadership is not about titles, positions or flowcharts. It is about one life influencing another." -- John C. Maxwell


Daily Tech Digest - June 14, 2018

Containerized Apps: An 8-Point Security Checklist

Containers allow applications to be abstracted from the underlying infrastructure on which they run. They give developers a way to package applications into smaller chunks that can run on different servers, thereby making them easier to deploy, maintain, and update. But securing containerized applications requires a somewhat different approach compared with securing traditional application environments. That's because they are a bit harder to scan for security vulnerabilities, the images on which they are built are often unverified, and standardization in the space is still evolving. Importantly, containers also can be spun up and down quickly, making them somewhat ephemeral in nature from a security standpoint. "Even though container technology may be a new concept to companies deploying them, the idea behind them should be familiar," says Kirsten Newcomer, senior principal product manager, security at Red Hat. Organizations need to think about security through the application stack both before deploying a container and throughout its life cycle.


Cisco opens DNA Center network control and management software

Analysts said opening DNA Center to the world is potentially a good move and could help customers more easily build strategic applications, but it will take a big effort to make it a successful venture. “DNA Center is Cisco's strategic management platform going forward, and we believe it will consume functionality that is currently distributed across several products. This should help as Cisco customers have cited multiple management tools as an ongoing challenge,” said Andrew Lerner, research vice president with Gartner. “So, this is a move in the right direction, but much work remains. For example, much of the data-center-networking portfolio including ACI and Nexus 9000 switches are not well integrated into DNA Center at this point,” Lerner said. “This announcement is about opening up DNA Center’s capabilities via API to do things such as orchestrating with other vendors and platforms – like Infoblox or ServiceNow. This can add to the value of the DNA Center platform if third parties and customers use the APIs and/or SDK to develop integrations. However, that potential is largely aspirational at this point, as the depth and breadth of integrations that will be created are undetermined,” Lerner said.


Encryption is under attack, says Venafi CEO Jeff Hudson


Unfortunately, encryption is currently under attack from not one, but two sources – governments seeking backdoor access to encryption algorithms, and criminals wanting to breach encryption to gain access to sensitive data. Although she later backed down, former UK home secretary Amber Rudd demanded last year that technology companies create backdoors in messaging apps to give the security services access to encrypted communications.  More recently, FBI director Christopher Wray renewed his call for backdoors in encryption, exclusively for the use of law enforcement agencies, and US senator Dianne Feinstein is spearheading a campaign for law enforcement to have access to any information sent or stored electronically.  “I think there is a naivety about the cyber world and how to secure it,” said Hudson. “People tend to run off and make proclamations, like installing a backdoor is a really good idea.” Governments want encryption to work, but they also want to be able to access encrypted information in order to pursue criminals. However, installing a backdoor in an encryption system would create a fundamental vulnerability in the protection that would inevitably be exploited.


Optimizing an artificial intelligence architecture: The race is on


Many GPU-based solutions are based on direct-attached storage (DAS) deployment models, which makes AI's distributed training and inferencing very difficult to do. As a result, staging and management of these deep learning data pipelines can become complex, time-consuming tasks. This bottleneck is being addressed with non-volatile memory express, or NVMe, which was originally designed to provide better connectivity between solid-state drives (SSDs) and traditional enterprise servers. Now, it is being baked into new I/O fabrics to improve AI workloads. The thinking is that NVMe over Fabrics (NVMeF), as these interfaces are called, will help reduce the overhead in converting between network protocols and in managing the idiosyncrasies of each type of SSD. This could allow CIOs to justify the cost of AI apps that use larger data sets. There are risks with NVMeF, starting with the high cost of investing in the bleeding edge. Plus, the industry has not settled on a vendor-neutral approach to NVMeF yet, which means CIOs also need to be wary of vendor lock-in as they choose a product.


What is a CASB? What you need to know before you buy

Gartner predicts that by 2020, more enterprises will use CASBs than not, which represents a big jump from the 10 percent that used them at the end of 2017. Several years ago, many enterprises purchased CASBs to stem the tide of what was then called shadow IT and is now considered standard operating procedure in many businesses. IT managers would get a call from their commercial Dropbox sales rep and be told that hundreds of their users were using personal Dropbox accounts, which was often news that they didn’t want to hear. That was the initial sales pitch by the CASB vendors: we can discover where all your cloud data lies and help to protect it. Traditional security tools didn’t provide this visibility, especially when the network traffic never was seen by the corporate data center. “I want to have control over my data, even when it isn’t residing in my own machines,” said Steve Riley of Gartner. The first attempts at using CASBs were eye-opening for many corporate IT managers. When they were first deployed, IT would find ten times the number of cloud services in use than they had estimated, according to Riley. That turned into a big selling point.


How do you retain your data scientists?

The single largest effect we observed involved office politics, which can be a serious problem for data scientists because many feel poorly equipped to handle it. And companies that are building data science teams may struggle to provide the support and direction they need—especially if they’re new to the game. Data scientists in a strife-ridden work environment—compared with one free of infighting, and with all other factors being equal—had job satisfaction that was 1.3 points lower, making it the biggest move we saw in the entire data set. But it would be a mistake to think that allowing them to work remotely would be an effective bandage for a difficult office environment. We discovered that the more people work off-site, the more affected they are by political issues: Remote workers in politicized work environments experienced a job satisfaction decline of 1.5 points, compared with a decline of 1.2 points for employees always in the office. Clearly, a strong corporate culture gives you the flexibility to allow more remote work.


Predictive analytics has been used in a variety of fields to minimize waste, increase effective utilization of resources and help all stakeholders find better parity with organizational goals. It hasn’t been put to the test in the field of innovation to the same extent as in the fields of finance, public polling, law enforcement and a number of others. However, that is primarily because it takes time to develop predictive analytics models and the forecasts are often made years into the future. ... Demographics are major predictors of demand for various products and services. One of the main reasons that companies have difficulty introducing successful products to the market is that they have a hard time forecasting changes in demographics. They often assume that the composition of income, ethnicities, gender, and other factors among the general population will remain static. They are often blindsided when the representation of some groups grows faster than expected, which changes the level of demand for their product. ... Studying other products can be a great way to determine the likelihood of success for a similar one. If a product developed around a particular market failed in the past, there is a good chance that it will fail again, unless there has been a major shift in the market.


Is your company part of the GDPR 'mobile loophole'?

Mobile tech, and especially mobile brought into companies through BYOD, has unique challenges for companies that need to comply with the General Data Protection Regulation (GDPR) — and that’s virtually all companies, not just the ones in Europe. The regulation compels companies to manage personal data and protect privacy, and it gives individuals a say in what data about them is used and how. GDPR has several disclosure and control requirements, such as providing notice of any personally identifiable data collection, notifying of any data breaches, obtaining consent of any person for whom data is being collected, recording what and how data is being used, and providing a right for people whose data is being collected to see, modify, and/or delete any information about them from corporate systems. The problem is many corporate systems now extend into mobile branches that include smartphones and, in some cases, tablets. Analysts at J.Gold Associates LLC estimate that in about 35 to 50 percent of cases, these devices are not actually corporate devices, but personal devices being used by employees of the company in their daily work.


How Technology Is Impacting the Future of Work Through Fragmentation


What technology has done is break up this glue between value chains as it has made transaction costs low by lowering the cost of communication and information exchange. As the cost of exchanging information goes down, at some point the balance between bottom-up information flow and top-down decision-making goes out of kilter. There is too much information to process for the top to make efficient decisions. To use a systems term, the throughput of information renders centralized decision-making an inefficient value chain mechanism. The system - in this case a company or an organization - therefore adapts by fragmenting its structure and changing the balance between responsibility and accountability of its agents. A new structure begins to emerge with new job titles, new leadership positions, new jobs, new requirements, and even cultural impacts. Capitalism after all is as much a cultural artefact as it is an economic one. It is this adaptive phenomenon that I call the Theory of Fragmentation, this continuous breaking of existing rigid structures to create a flatter, more spread out structure that stems from the combinatorial evolution of technology.


What is cloud-native? The modern way to develop software

“A cloud native app is architected specifically to run in the elastic and distributed nature required by modern cloud computing platforms,” says Mike Kavis, a managing director with consulting firm Deloitte. “These apps are loosely coupled, meaning the code is not hard-wired to any of the infrastructure components, so that the app can scale up and down on demand and embrace the concepts of immutable infrastructure. Typically, these architectures are built using microservices, but that is not a mandatory requirement.” For cloud-native applications, the big difference then is really how the application is built, delivered, and operated, says Andi Mann, chief technology advocate at Splunk, a cloud services provider. “Taking advantage of cloud services means using agile and scalable components like containers to deliver discrete and reusable features that integrate in well-described ways, even across technology boundaries like multicloud, which allows delivery teams to rapidly iterate using repeatable automation and orchestration.” Cloud-native app development typically includes devops, agile methodology, microservices, cloud platforms, container and orchestration technologies like Docker and Kubernetes, and continuous delivery—in short, every new and modern method of application deployment.



Quote for the day:


"If no good can come from a decision, then no decision should be made." -- Simon Sinek


Daily Tech Digest - June 13, 2018

5 Free Online Machine Learning Courses

"Data scientist roles have grown over 650 percent since 2012, but currently 35,000 people in the US have data science skills, while hundreds of companies are hiring for those roles - even those you may not expect in sectors like retail and finance - supply of candidates for these roles cannot keep up with demand," the report said. What's more, the report cites the growth rate for machine learning engineer jobs to be higher than that of any other job over the last 5 years. Just what is a machine learning engineer, and how can you get the skills needed to perform this job? The LinkedIn report also looked at the career paths of professionals who are currently employed as machine learning engineers. The top jobs leading to that the machine learning title are software engineer, research assistant, teaching assistant, data scientist, and system engineer. The top skills for that job title are machine learning, research, algorithms, software, and deep learning. Experts at the recent AI Summit at Interop ITX 2018 said that to keep up with their own skill sets they attend conferences, read academic papers, participate in professional communities. Cloudera Fast Forward Labs Data Scientist Friederike Schuur said that there is so much material online that people can use to improve their skills.



Regulatory Compliance as a Strategic Weapon – conclusion

Offensive and defensive data strategies have different objectives. The objective of a defensive data strategy is to keep data secure and private, while also maintaining proper governance and regulatory compliance. This requires understanding all requirements and effectively implementing the right processes to meet them. In the end, you create an SVOT (single version of the truth) data set to share with regulators. With an offensive data strategy, the goal is to improve the firm’s competitive position, enter new markets, or grow the business. In these cases, you may want to manipulate data in a different way, or at least look at it through a different lens. An institution doesn’t necessarily have to comply with a “single version of the truth” when taking on these initiatives. Therefore, a firm could end up with multiple versions of truth. The defensive data strategy is usually the responsibility of the data engineers and Chief Data Officer. The offensive strategy, however, is best in the hands of business analysts. Analysts need the freedom to create new “versions of a truth” that help them develop new business initiatives.


See How This Hospital Uses Artificial Intelligence To Find Kidney Disease


"We now have exponential increases in digital healthcare data due to the internet, electronic health records, personal health records, cellphones, wearable devices, digital medical devices, sensors and many other factors," said Drew Gantt. partner and co-chair, Healthcare, Venable LLP. "This data will fuel algorithmic solutions, clinical decision support tools, and visual tools in the near term." Gantt says that healthcare businesses should focus on using AI technology to solve problems, uncover value, promote their mission and create competitive advantage rather than merely using it for technology's sake or to have the latest thing. Mount Sinai Hospital in New York announced a partnership on June 1, 2018, with the AI healthcare startup, RenalytixAI, to create an AI tool that identifies patients at the hospital who are at risk for advanced kidney disease.  In 2017, the Global Kidney Health Atlas reported that 1 in 1o people worldwide has chronic kidney disease. And, a patient with kidney disease traditionally won't begin treatment until dialysis is needed.


What’s the Difference Between Data Integration and Data Engineering?

Data Engineering develops, constructs and maintains large-scale data processing systems that collect data from a variety of structured and unstructured data sources, store the data in a scale-out data lake and prepare it using ELT (Extract, Load, Transform) techniques in preparation for data science exploration and analytic modeling: Collects the data from a variety of traditional and non-traditional sources, stores it in a data repository, and cleanses and integrates the data (data prep) for analysis; Designs, builds and manages the information and big data infrastructure, co-developing the architecture that helps analyze and process data as the organization requires and further optimizing those systems to perform smoothly; Evaluates, compares and improves the different approaches, including design pattern innovation, data lifecycle design, data ontology alignment, annotated datasets, and elastic search approaches; Prepares the data for the data scientist's exploration and discovery process. For example, we might have data containing 30 attributes where two attributes are used to compute another attribute (for example, an index), and that computed feature is used for further analysis.


Guiding Brands Through Digital Transformation

Digital transformation has allowed some brands to move from a “me too” position to one of leadership where they serve as trailblazers who build loyalty and turn consumers into brand ambassadors. It can allow brands to pivot into new markets, or provide new offerings that leverage and enhance their current market position, propelling them to the front of the pack. For some brands this digital transformation might be their first foray into utilizing technology, for others it could be a change or improvement to an existing technology that leverages a digital ecosystem in a way previously not considered. For everyone, it is an essential journey. This digital journey is one of the most critical investments that a brand will undertake in the coming years. It has the potential to fundamentally change a brand’s relationship with its customers, as well as to drive entirely new business models. This journey becomes much easier when undertaken with partners who bring cross-functional skills to the table. At Radius Innovation & Development, we help companies find their way through the strategy of ideation and digital transformation.


Why Connected Cars Need a New Approach for Security


At the least dangerous end of the scale the risks include hackers using ransomware to take control of infotainment systems and demand payment in exchange for users regaining control of their devices. But as more parts of the car become connected, the dangers grow. Smart locks are fast becoming a feature of many cars, allowing drivers the added convenience of keeping their keys in their pocket when they open up and start the car, and driving the growth of the P2P car sharing economy by giving approved individuals access to a lender’s car without having to meet face to face. However, without the proper protections, drivers could soon discover that hackers are able to cheat the system and steal their vehicles. The dangers don’t stop at theft. White hat hackers have already demonstrated how they can access a car’s CAN protocol to shut down a vehicle’s security mechanisms, including the airbags and brakes. It makes cyber attacks a potentially lethal weapon and gives manufacturers a powerful incentive to tighten their security measures.


The Next Frontier Of Artificial Intelligence: Building Machines That Read Your Emotions


Emotional intelligence is what allows us to take the feelings and considerations of other people into account in the decisions we make. So far, progress here has been limited. Alexa, helpful as she may be in some circumstances, will not consider your feelings, or those of others affected by her actions, as she keeps your smart home running smoothly. But all that could be about to change – in fact, it has to change if AI is to reach its potential as a tool for assisting in our business and day-to-day lives. Recently, emotion-focused AI developer Affectiva became one of the few small businesses to be asked to join the Partnership on AI to Benefit People and Society. The interest of the “grand masters” of AI that make up the partnership – Google, Microsoft, Facebook and others – in Affectiva’s growing business is a sure sign that this overlooked aspect of AI is starting to get the attention it deserves. Affectiva co-founder and CEO Rana el Kaliouby talked to me about her company’s work to develop what she calls “multi-modal emotion AI”. There may already be a growing understanding of how sentiment analysis can help machines understand how humans are feeling, and adapt their behavior accordingly.
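To give a concrete, if deliberately simplified, sense of the text-only end of this spectrum, a lexicon-based sentiment scorer can be sketched in a few lines. The word lists below are invented purely for illustration; real systems, including Affectiva's multi-modal approach, rely on trained models rather than hand-built lists:

    # A tiny, lexicon-based illustration of sentiment analysis, the text-only
    # cousin of the multi-modal emotion AI described above. Word lists are
    # invented for illustration; production systems use trained models.
    import re

    POSITIVE = {"great", "happy", "love", "smooth"}
    NEGATIVE = {"angry", "frustrated", "broken", "slow"}

    def sentiment_score(text):
        words = re.findall(r"[a-z]+", text.lower())
        # Positive words push the score up, negative words push it down.
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    print(sentiment_score("I love how smooth this feels"))      # prints 2
    print(sentiment_score("I am angry, this thing is broken"))  # prints -2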


How change data capture technology drives modern data architectures


The methods and terminology for data transfer tend to vary by target. While the transfer of data and metadata into a database involves simple replication, a more complicated extract, transform and load (ETL) process is required for data warehouse targets. Data lakes, meanwhile, typically can ingest data in its native format. Finally, streaming targets require source data to be published and consumed in a series of messages. Any of these four target types can reside on-premises, in the cloud, or in a hybrid combination of the two. In practice, most enterprises have a patchwork of the architectures described here, as they apply different engines to different workloads. A trial-and-error learning process, changing business requirements, and the rise of new platforms all mean that data managers will need to keep copying data from one place to another. Data mobility will be critical to the success of modern enterprise IT departments for the foreseeable future. ... By moving and processing incremental data and metadata updates in real time, these organizations have reduced or eliminated the need for resource-draining and disruptive batch (a.k.a. full) loads.
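As a toy illustration of the incremental approach described here (the change-event format and rows are invented and are not tied to any particular CDC product):

    # Toy illustration of change data capture: incremental change events are
    # applied to a target table instead of periodically reloading the full
    # source. The event format and rows here are invented for illustration.
    target = {}  # target "table", keyed by primary key

    change_events = [
        {"op": "insert", "key": 1, "row": {"name": "Ada", "dept": "R&D"}},
        {"op": "insert", "key": 2, "row": {"name": "Grace", "dept": "Ops"}},
        {"op": "update", "key": 1, "row": {"name": "Ada", "dept": "Engineering"}},
        {"op": "delete", "key": 2, "row": None},
    ]

    def apply_change(event):
        # Each event carries only the delta, so the target stays current
        # without a resource-draining batch (full) load.
        if event["op"] in ("insert", "update"):
            target[event["key"]] = event["row"]
        elif event["op"] == "delete":
            target.pop(event["key"], None)

    for event in change_events:
        apply_change(event)

    print(target)  # {1: {'name': 'Ada', 'dept': 'Engineering'}}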


The future of work requires a new C-suite

Foutty referred to the C-suite's traditional structure as being all about command and control, with clear direction and dictation from the CEO. A leader's geographic location was often paramount to their involvement in C-suite 1.0: The closer to the CEO, the better. C-suite 2.0, where most C-suites are today, focuses on functional expertise and depth of knowledge. Those hired into leadership roles of CIO, CFO, CMO and so on were seen as "the best and brightest in their domain," Foutty said. It is this unforgiving focus on expertise that triggered the still-ongoing proliferation of C-suite titles, according to Foutty. Take the chief digital officer (CDO) as an example. As companies start to develop a digital business strategy, they are sometimes prone to establish a CDO role to lead the charge. But the role of the CDO is a Band-Aid for a bigger trend that will affect the future of work, in Deloitte's view. Technology -- be it digital, AI or robotics -- is no longer a back-office modernization or an upgrade. It's changing how companies compete for customers and for employees.


Automation critical to scalable network security

Changing firewall rules can be a long and painful process, and removing rules can be even more painful, which is why firewall rules wind up being unmanageable in a fairly short period of time. With Tufin's product, though, customers can make changes and propagate them across all the firewalls in a matter of minutes instead of having to touch each box one at a time. Another new feature of Orchestration Suite R18-1 is the ability to migrate the configuration from Cisco’s Adaptive Security Appliance (ASA) to Firepower. Without an automated tool, migrating from one product to the other can take a significant amount of time and be filled with errors, as rules have to be manually ported from one system to the other. Any mistake is then carried over. The complexity of this process often holds companies back from upgrading or switching platforms. Tufin’s products make moving from ASA to Firepower error-free, reducing the risk of switching while validating the rules to ensure no broken or expired rules are migrated. The enhanced support in TOS 18-1 will allow security professionals to ensure they meet compliance mandates with an auditable, documented process for Firepower policy changes.
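A tiny sketch of the kind of rule validation described here might look like the following. This is not Tufin's API; the rule format and fields are invented purely to illustrate filtering out expired or malformed rules before they are ported to the new platform:

    # Hypothetical sketch of pre-migration rule validation: drop expired or
    # obviously broken rules before porting them. Not Tufin's API; the rule
    # format is invented for illustration only.
    from datetime import date

    rules = [
        {"name": "allow-web", "action": "permit", "expires": None},
        {"name": "temp-vendor", "action": "permit", "expires": date(2017, 1, 1)},
        {"name": "broken-rule", "action": None, "expires": None},
    ]

    today = date.today()

    def is_migratable(rule):
        if rule["action"] is None:                       # malformed rule
            return False
        if rule["expires"] and rule["expires"] < today:  # expired rule
            return False
        return True

    migratable = [r for r in rules if is_migratable(r)]
    print([r["name"] for r in migratable])  # ['allow-web']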



Quote for the day:


"Positive and powerful things happen when you distance yourself from negativity." -- Steve Gutzler


Daily Tech Digest - June 12, 2018

Apple Watch, IoT, Internet of Things
The more common, and much harder to deal with, problem is IoT devices located in places that can't easily be reached. Last fall, for example, I wrote about the (voluntary) recall of almost half a million St. Jude Medical IoT devices due to a risk of hacking. No big deal, right? Devices get recalled for security fixes all the time. Unfortunately, in this case, the devices involved were pacemakers installed not in some easy-access equipment rack, but in patients' chests. Swapping them out would be a very big deal (fortunately, as of publication of the post, none of the pacemakers had actually been compromised). The issue goes much further than watches and pacemakers. Smart cars have IoT systems that will become obsolete long before the vehicles in which they're installed reach the end of their useful lives (sort of like aging 8-track players still riding around in the dashboards of cars from the 1980s). And it gets worse. For example, many industrial sensors essential to delivering the benefits of IoT are located in hard-to-reach spots where replacement or upgrades would be difficult, expensive, or hazardous.



Industry must do more to open up cyber security profession


“It is scary that for many young people, without the right words at the right moment, it is all too easy to end up going down the wrong path, not because they are bad people, but just because they are looking for opportunities to apply the skills they have. I had no interest in stealing credit cards, but I did want to hack stuff,” Lyne added. According to Lyne, there are two important issues that industry in general, and the cyber security industry in particular, needs to focus on. ...  “I have spoken to many amazingly talented people coming out of competitions aimed at finding cyber security talent who are shocked that their skills can actually be used as a career. ... “Second, we need to ensure that employers are not overlooking talented people by having unrealistic recruitment criteria. We are seeing people who have proven that they have the right skills, yet they are struggling to find a job because employers are insisting on things like five years’ experience or formal certifications. As a result, they are struggling to get into the industry, prove their worth and get on to the career ladder,” said Lyne.


Digital disruption: How to create a culture of innovation in your organisation

"Innovation should become a day-to-day element of who you are," he says. "Once you get into a can-do mind set, and innovation starts to feel good, you start to develop confidence. You then realise it's OK to try things and make mistakes, and the business starts to create a culture of innovation." O'Connor says the firm has acquired many pioneering startups, where the entrepreneurs remain part of the wider company. That retained knowledge meant that when he had an idea, O'Connor could seek out mentors who had been on a similar journey beforehand. This supporting organisational culture provided guidance, structure and control through an informal mentoring network. O'Connor says that "mini Silicon Valley effect", where you have a critical mass of people who care and collaborate, is when you start making stuff happen. "An idea in isolation is a seed in concrete -- it can't take root. Idea generation is just part of the problem. You need the framework and culture to prove that thinking creatively can produce great results," he says.


People Are Changing the Way They Use Social Media

People walk past a Facebook logo on a shop window
What Facebook is not about is data misuse. Data misuse, along with spam, fake news and clickbait, is something that happens on Facebook, as a recent apology ad from the company put it, but it is not what Facebook is about. What does Facebook do? It connects. What is Facebook? A community. What is Facebook for? It’s for friends. Research shows that people become closer to each other through intimate self-disclosure. But there’s only so much connecting social-media platforms can do if people are too concerned about privacy to use them for the full breadth and depth of human communication. Paradoxically, these tools that were built to bolster relationships may, by their very nature, be keeping people at a distance from each other. I recently conducted a survey, trying to determine how much people censor themselves on social media and whether the Cambridge Analytica scandal has changed their behavior on Facebook and other platforms. I also shared my survey results with Sauvik Das, an assistant professor of interactive computing at the Georgia Institute of Technology, and Sarita Schoenebeck, the director of the Living Online Lab at the University of Michigan.


Business tech after GDPR: Here's what comes next

For those companies that have done absolutely nothing, I think they're probably in a worse position, if not a really bad position, to be quite honest, because you never really know when a regulator might start knocking on the door. But if you've got a partial program in place, and you can show an effective program or an effective project plan for putting that program in place, I think most regulators would understand that this is an evolving thing for most organizations. As long as you're in business, you're going to have data, and you're going to come up against new challenges with the data; as your business moves, changes and modifies, you're going to have to change your data protection profile. So it's going to be an evolution. To me, the date last Friday was really the beginning of the end. Or the end of the beginning, sorry. In regards to putting an effective program in place for data management and data protection.


4 tips for retaining key software and IT talent


A core component of retaining your employees, regardless of their position, is learning what matters most to them. When it comes to tech professionals, remote work options and autonomy are the most in-demand perks. First, prioritize remote work options. If entirely remote positions don’t mesh well with your collaborative company culture, consider allowing tech-focused employees to work remotely three to four days per week. With the emergence of group chat apps like Slack, having employees work remotely doesn't have to mean decreased communication or collaboration. In fact, it might actually lead to increased collaboration, productivity and innovation. Second, give your employees the autonomy they need to get the job done. The reason many tech professionals are paid so well is that they have niche skills that most other people don't. Trust them and the knowledge they have to get the job done well. There's rarely, if ever, a need to micromanage high-end software development talent. If developers or software engineers feel they are constantly being asked to do things a certain way that doesn't make sense to them, they won't stick around long enough for you to solve the management problem.


Email-based cyber attacks gathering momentum


Email is the number one threat vector, according to Barracuda researchers, precisely because it allows malicious third parties to directly target employees within an organisation, underlining the importance of user education around email-related cyber threats. Despite the availability of tools and technologies such as email encryption, data loss prevention, social engineering detection, phishing simulation and artificial intelligence that can help mitigate these threats, the survey revealed that the vast majority of respondents believed user training and awareness programmes were a vital prerequisite to improving email security. Survey respondents recognised the insider threat, claiming that poor employee behaviour (79%) was a greater email security concern than inadequate tools (21%). There was most concern over individual staff members falling victim (47%), although executives (37%) were also viewed as a potentially dangerous weak link in the security chain. Finance (26%) and sales (18%) departments were viewed with most caution. Topping the list of concerns for respondents was the fact that these roles and departments have access to sensitive information and systems and are the most likely to be targeted.


Google adds single-tenant VMs for compliance, license cares

Google still lags behind AWS and Microsoft Azure in public cloud capabilities, but it has added services and support in recent months to shake its image as a cloud valued solely for its engineering. Google must expand its enterprise customer base, especially with large organizations in which multiple stakeholders sign off on use of a particular cloud, said Fernando Montenegro, a 451 Research analyst. Not all companies will pay the premium for this functionality, but it could be critical to those with compliance concerns, including those that must prove they're on dedicated hardware in a specific location. For example, a DevOps team may want to build a CI/CD pipeline that releases into production, but a risk-averse security team might have some trepidation. With sole tenancy, that DevOps team has the flexibility to spin up and down, while the security team can sign off on it because it meets some internal or external requirement. "I can see security people being happy that we can meet our DevOps team halfway, so they can have their DevOps cake and we can have our security compliance cake, too," Montenegro said.


Active-Active Geo Distribution Strategy: Comparing Merge Replication and CRDT


Merge replication is a common technique employed by relational databases. This technique allows you to deploy a distributed database solution in which each database server has its own copy of data. An external agent then collects changes to the local copies and merges them in an effort to force all of the database servers to contain the same copy of data. The topology of typical merge replication has database servers in multiple regions and follows the publisher/subscriber model. One of the servers is identified as a primary server or a publisher, while the rest of the servers are subscribers. In a normal flow, all changes to the publisher trickle down to the subscribers. However, in merge replication, subscribers can make database changes too, and merge all their local changes with the publisher. These changes will eventually go to all subscriber servers. Assuming no conflict occurs during the merge, all changes made to either the publisher or the subscribers will eventually converge as the same copy. The “Merge Agent” is an external service responsible for gathering all the changes to the local database servers and merging them into a single data set. When a conflict occurs, the merge agent follows a predefined set of rules to resolve the conflict.
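A much-simplified sketch of that merge step, using a last-writer-wins rule as the predefined conflict policy, might look like the following (record layout and timestamps are invented for illustration, and real merge agents support other conflict-resolution policies as well):

    # Simplified merge step with a last-writer-wins conflict rule. Record
    # layout and timestamps are invented for illustration only.
    publisher = {"cust42": {"email": "old@example.com", "ts": 100}}
    subscriber = {"cust42": {"email": "new@example.com", "ts": 105},
                  "cust77": {"email": "added@example.com", "ts": 101}}

    def merge(a, b):
        merged = dict(a)
        for key, row in b.items():
            if key not in merged:
                merged[key] = row                 # change made on one side only
            elif row["ts"] > merged[key]["ts"]:
                merged[key] = row                 # conflict: keep the newer write
        return merged

    converged = merge(publisher, subscriber)
    print(converged)  # every replica would eventually converge to this copy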


There’s still a lot of life left in tape backup

This industry likes to abandon technologies as soon as it adopts them, but a few find a way to hang around. I recently purchased a car, and in the finance office was a dot matrix printer, chugging away at the same multipage forms I saw used more than 25 years ago. Tape backup is also hanging in there. With data being produced in ever-increasing volumes, it has to be stored somewhere, and hard drives aren’t enough. For true mass backup, enterprises are still turning to tape, and the LTO Program Technology Provider Companies (TPCs) say 2017 shipments grew 12.9 percent over 2016 to 108,457 petabytes (PB) of tape capacity. The LTO TPCs are a group of three tape backup providers: HPE, IBM and Quantum. There are other tape backup providers, such as Oracle, which inherited the StorageTek business from Sun Microsystems and still sells those systems, but it was not included in the count. Actual unit shipments dropped slightly in 2017, which the organization attributes to customers waiting for new LTO-8-based units to ship. LTO-8 technology offers a compressed transfer rate of 750MB/sec., an improvement over LTO-7’s 400MB/sec., and capacity is increasing to 30TB compressed per cartridge, up from 22TB compressed in LTO-7.
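As a quick back-of-envelope check on what those quoted figures imply (assuming sustained throughput and decimal units, so a rough estimate only):

    # Rough sanity check on the quoted LTO-8 figures: how long it would take
    # to fill one 30TB (compressed) cartridge at a sustained 750MB/sec.
    capacity_bytes = 30e12        # 30 TB compressed capacity
    rate_bytes_per_sec = 750e6    # 750 MB/sec compressed transfer rate
    hours = capacity_bytes / rate_bytes_per_sec / 3600
    print(f"{hours:.1f} hours to fill one cartridge")  # about 11 hours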



Quote for the day:


"You can't lead anyone else further than you have gone yourself." -- Gene Mauch