Daily Tech Digest - August 23, 2018

Google Home at Work
It wouldn't make sense for every office environment, of course; having such a gadget in a crowded cubicle farm would probably lead to more annoyances (not to mention mischievous co-worker interference) than anything. But if you have a relatively isolated space in which you work, be it your own executive suite (look at you!) or a more humble home office (like mine), you might be surprised at how handy a Google Home or Smart Display could be. Now, is there a fair amount of overlap between what a Google Home or Smart Display on your desk can do and what you could already do with your phone? You'd better believe it. But performing a task on a permanent, stationary device can often be easier and more effective than futzing around with your phone. Using a smart speaker also doesn't wear down your precious mobile battery, and the device's standalone nature makes it better suited for certain types of tasks.


Service mesh architecture radicalizes container networking

A service mesh architecture uses sidecar containers to facilitate network traffic
To Thomas, true microservices are as independent as possible. Each service handles one individual method or domain function; uses its own separate data store; relies on asynchronous event-based communication with other microservices; and lets developers design, develop, test, deploy and replace this individual function without having to redeploy any other part of the application. "Plenty of mainstream companies are not necessarily willing to invest quite that much time and money into their application architecture," Thomas contended. "They're still doing things in a more coarse-grained manner, and they're not going to use a mesh, at least until the mesh becomes built into the platform as a service that they're using, or until we get brand-new development frameworks." Some early adopters of the service mesh architecture don't believe a slew of microservices is necessary to benefit from the technology. "It allows you to push traffic around in a centralized way that's consistent across many different environments and technologies, and I feel like that's useful at any scale," said Zack Angelo, director of platform engineering at BigCommerce, an e-commerce company based in Austin, Texas, that uses the Linkerd service mesh. "Even if you have 10 or 20 services, that's an immensely useful capability to have."


Redefining work in the digital age

To ensure a successful transition, experts say organizations must figure out the right intersection of humans and intelligent machines. Fifty-four percent of those surveyed by Accenture said human-machine collaboration is important to achieving strategic priorities, while 46 percent believe traditional job descriptions are now obsolete and 29 percent have already redesigned job roles extensively. “We’ve never seen change like this,” says Katherine Lavelle, managing director of Accenture’s Strategy, Talent & Organization practice in North America. “This is about generating new levels of capabilities and results for clients and customers augmented through smart automation and humans. Whoever figures out the collaboration between the two is poised to win the war.” Training and reskilling workers will be essential to creating an enhanced employee experience that redefines the nature of work. “In some ways, we’ll go back to the basics on things we put a value on prior to automation,” Lavelle says.


7 steps to better code reviews

Code review has been shown to significantly speed up the development process. But what are the responsibilities of the code reviewer? When running a code review, how do you ensure constructive feedback? How do you solicit input that will expedite and improve the project? Here are a few tips for running a solid code review. ... Try to get to the initial pass as soon as possible after you receive the request. You don’t have to go into depth just yet. Just do a quick overview and have your team write down their first impressions and thoughts. Use a ticketing system. Most software development platforms facilitate comments and discussion on different aspects of the code. Every proposed change to the code is a new ticket. As soon as any team member sees a change that needs to be made, they create a ticket for it. The ticket should describe what the change is, where it would go, and why it’s necessary. Then the others on your team can review the ticket and add their own comments.


How advanced OCR found new life in big data systems

One reason that OCR was rarely used until recently is that it wasn’t especially reliable. Even when, in the early 2000s, the programs reached about 95% accuracy, businesses ran the risk that software would produce documents containing major mistakes – and particularly with numerals, such errors can be labor intensive to identify and correct. Analysts would do just as well entering the data by hand. However, now that the scan accuracy is significantly improved, the resultant data is more valuable, and analysts need only cross-reference the scans with original documents if something in the content doesn’t make sense. NLP has also helped increase the accuracy of OCR scans. For example, older OCR programs might read chart lines as the letter ‘L’ or number ‘1.’ NLP is context dependent, however, so it can identify if something is a chart or graph, whether it’s reading a bill or an invoice, and other types of nuanced content.


Climb the five steps of a continuous delivery maturity model


A maturity model describes milestones on the path of improvement for a particular type of process. In the IT world, the best known of these is the capability maturity model (CMM), a five-level evolutionary path of increasingly organized and systematically more mature software development processes. The CMM focuses on code development, but in the era of virtual infrastructure, agile automated processes and rapid delivery cycles, code release testing and delivery are equally important. Continuous delivery (CD), sometimes paired with continuous integration to make CI/CD, is an automated process for the rapid deployment of new software versions. A complicated process, CD includes several steps that span multiple departments. CI/CD and DevOps can prove daunting to organizations that view modernization as a dichotomy: Either you're DevOps or you're legacy. But continuous delivery is an efficiency improvement that can evolve in stages.


Analysis: Anthem Data Breach Settlement

"Credit monitoring itself as an award is frankly not that effective, at least in my personal view," DeGraw, who was not involved in the Anthem case, says in an interview with Information Security Media Group. "A persisting problem is that post-breach, [bad actors] can still potentially use the stolen records, including medical information, to cause harm." A more effective approach for most consumers, DeGraw says, is to put a credit freeze on their accounts, "which is a bit more cumbersome at times ... but that's a more effective remedy." For breach victims, "there is no easy way to clean up your life," the attorney says. "You have a fair number of out-of-pocket costs, including taking a day off [from work] to file a report ... and maybe hire people to clean up your accounts and other things that have been opened in your name. It can be a hassle and it's time-consuming and it doesn't go away soon because we can't change our Social Security numbers or healthcare numbers relatively easily."


Testing Programmable Infrastructure - a Year On


Worse than the technical challenges, we faced cultural challenges too. Sysadmins and testers aren't used to working with one another! The project made it very clear to me that programmable infrastructure is becoming widespread. There are very specific domain issues that make testing it tricky. But it felt like nobody had the answers. Infrastructure resources are critical to successful software. If there's a problem with your database or your load balancer, it now could be due to committed code. That code is production code, so we should test it! Over a year has passed since I first presented that talk. Even longer since the project which inspired me to present it. I have been on a number of other projects since, and my thinking has changed too. ... When testing anything new, it’s important to revisit fundamentals. When I first gave my talk, I focused a lot on how we tested it, but not a lot on why we tested it. I cannot tell you what your cloud infrastructure landscape looks like. The topic is very broad, and fast changing.


Microsoft Office 365 Turns Data Storage Upside Down

In addition to syncing and storing your own private files, the OneDrive for Business client can also sync corporate data stored elsewhere in SharePoint. So this client provides access to files in both locations. Best of all, you can choose what you “see” in your OneDrive for Business client and what you are going to sync locally to your computer. To muddy the waters just a bit more, Microsoft recently announced that OneDrive for Business will soon start to offer the option to automatically sync your local profile default data locations such as the documents and pictures folders. And it will also have one-button ransomware protection for your files. So now we’re storing personal data, bits of the user profile, and we’re syncing locally some or all of the data in SharePoint. But we’re still upside down from how business has historically stored data because our corporate space is smaller than the personal space.


How security binding choices impact everything in global file search

Few non-software architects understand the outsized impact that security binding has on global file search performance, scalability, multi-tenancy, hardware, supporting infrastructure, capital expenditures, operating expenditures and the total cost of ownership. The wrong security binding choice can add hundreds of thousands to millions of dollars to the TCO. Between additional expensive hardware, supporting infrastructure, maintenance, software licensing, training, power, cooling, shelf space, rack space, cables, conduit, transceivers and allocated overhead, the costs can be shockingly high. When we examine the pros, cons, tradeoffs, consequences, and workarounds for each of three different security binding choices -- late binding, early binding, and real-time binding -- we find that real-time binding provides the performance of early binding with the accuracy of late binding.
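A toy sketch can make the accuracy tradeoff concrete (all file names, queries, and ACLs below are invented for illustration). Early binding bakes permissions into the index when it is built, so a later ACL change can leak results; late binding re-checks live ACLs on every hit, which is accurate but adds per-hit cost:

```python
# Live ACLs at query time: bob has since lost access to report.pdf
current_acl = {"report.pdf": {"alice"}, "plan.doc": {"alice", "bob"}}

# Early-bound index: permissions captured at index-build time (now stale)
index = {"q3": [("report.pdf", {"alice", "bob"}),
                ("plan.doc", {"alice", "bob"})]}

def early_binding(query, user):
    # Fast -- no per-hit ACL lookup -- but can serve stale permissions
    return [doc for doc, allowed in index[query] if user in allowed]

def late_binding(query, user):
    # Accurate but slower: every hit is re-checked against live ACLs
    return [doc for doc, _ in index[query] if user in current_acl[doc]]

print(early_binding("q3", "bob"))  # ['report.pdf', 'plan.doc'] -- stale leak
print(late_binding("q3", "bob"))   # ['plan.doc']
```

Real-time binding, as described above, aims to deliver the late-binding answer at early-binding speed; the mechanics of that are product-specific and not shown here.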



Quote for the day:


"Good leaders must first become good servants." -- Robert Greenleaf


Daily Tech Digest - August 22, 2018


I have been in the space of artificial intelligence for a while and am aware that multiple classifications, distinctions, landscapes, and infographics exist to represent and track the different ways to think about AI. However, I am not a big fan of those categorization exercises, mainly because I tend to think that the effort of classifying dynamic data points into predetermined fixed boxes is often not worth the benefits of having such a “clear” framework. I also believe this landscape is useful for people new to the space to grasp at a glance the complexity and depth of this topic, as well as for those more experienced to have a reference point and to create new conversations around specific technologies. What follows is then an effort to draw an architecture to access knowledge on AI and follow emergent dynamics, a gateway of pre-existing knowledge on the topic that will allow you to scout around for additional information and eventually create new knowledge on AI. I call it the AI Knowledge Map (AIKM).




Using innovation labs and accelerators as a form of R&D to learn about certain industries is a great idea, as long as leaders realise that R&D and innovation are not the same thing. Innovation is the combination of clever new ideas and technologies with sustainably profitable business models. So the question still remains - as we work with startups or internal teams to learn about new industries, how are we going to convert those learnings into long-term revenues for the company? We have to design our labs and accelerators to be able to extract insights and create value. Other companies are very explicit about using innovation labs to accomplish their bottom line goals. These leaders are focused on balancing their portfolio, adding new business models and revenues to the company. The biggest challenge these leaders face is what to do with successful innovations. Not every product or service from the innovation lab or accelerator will be successful. But once we have something promising, we need to figure out a way to scale that product or service.



The Case for Work from Home and Flexible Working


To attract and retain employees under the old paradigm the business must deliver an EVP that provides career opportunity and professional development, regularly sign-posted by role expansion and salary growth. ... An alternative operating model is required for the front line. An operating model that supports a new recruitment promise based on the provision of flexible working arrangements that allows employees to manage their work life priorities. ... The flexible working operating models will be designed to reflect our evolving understanding of what motivates employees, this also forms an important part of the EVP. It is based on engaging the intrinsic motivators of autonomy, mastery and purpose. ... Flexible working enables the front-line to be deployed dynamically to where customers choose to be, whether that be in store, online or on the phones. Creating opportunities to vary not just “where and when I work” but also “what I do” empowers our employees to genuinely design their own work experience. Role variation provides a unique and highly competitive dimension to the EVP and it enhances the business's resilience to uncertainty.


Data management: Using NoSQL to drive business transformation

Because of our core architecture it is very important to our customers, who are deploying applications, that they can do that in near real-time speed. So, when we think about the capabilities that we have layered together inside of our data platform, we're unlocking the power of NoSQL, but doing so in a way that enables application developers to very quickly learn the platform, and help them become efficient in picking up applications to take advantage of it. Now, that core platform can run at any point in the cloud -- everything from the major public cloud to customer's private data centres -- and it can also run on premise. Now we've extended the power of the platform out to the edge. We have a solution that we have called Couchbase Lite. This is small enough that it can run inside an application on a mobile device, and you still get the full power of the platform, including the data structure and our ability to query and, very soon, you will be able to run operational analytics on top of that.


Balancing innovation and compliance for business success


Whether it is the need for greater transparency with user data, improved reporting methods for the regulator or enhanced security measures, any new technology being introduced will need to be carefully assessed so that businesses recognise and understand whether it is compliant with current legislation. While staff trials can often help to raise any last-minute concerns about the functionality of new IT solutions, management also needs to include the IT team and compliance teams in this activity. In many cases, the IT department is left out of discussions regarding data management and compliance, making it hard for them to identify any potential conflicts in this area. In order to address this issue, IT needs to have a greater understanding of the wider business. In particular, the IT department needs to be as involved in the company’s wider compliance measures as it is with particular applications or systems, as this will make it much easier to establish what controls need to be put in place.


It’s Time for Token Binding

What is so great about token binding, you might ask? Token binding makes cookies, OAuth access tokens and refresh tokens, and OpenID Connect ID Tokens unusable outside of the client-specific TLS context in which they were issued. Normally such tokens are “bearer” tokens, meaning that whoever possesses the token can exchange the token for resources, but token binding improves on this pattern, by layering in a confirmation mechanism to test cryptographic material collected at time of token issuance against cryptographic material collected at the time of token use. Only the right client, using the right TLS channel, will pass the test. This process of forcing the entity presenting the token to prove itself is called “proof of possession”. It turns out that cookies and tokens can be used outside of the original TLS context in all sorts of malicious ways, whether through hijacked session cookies, leaked access tokens, or a sophisticated man-in-the-middle (MITM) attack. This is why the IETF OAuth 2 Security Best Current Practice draft recommends token binding, and why we just recently doubled the rewards on our identity bounty program.
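The confirmation mechanism described above can be sketched in a few lines. This is not the actual Token Binding wire protocol (RFC 8471 operates at the TLS layer); it only illustrates the proof-of-possession idea of comparing key material recorded at issuance against key material presented at use, with invented claim and helper names:

```python
import hashlib
import hmac

def bind_token(claims: dict, client_tls_key: bytes) -> dict:
    """At issuance: record a confirmation hash of the client's key material."""
    bound = dict(claims)
    bound["cnf"] = hashlib.sha256(client_tls_key).hexdigest()
    return bound

def verify_binding(claims: dict, presented_tls_key: bytes) -> bool:
    """At use: the presenter must hold the same key material, or the test fails."""
    expected = hashlib.sha256(presented_tls_key).hexdigest()
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(claims.get("cnf", ""), expected)

token = bind_token({"sub": "alice"}, b"client-channel-key")
print(verify_binding(token, b"client-channel-key"))  # True  (right client)
print(verify_binding(token, b"attacker-key"))        # False (stolen token)
```

A stolen bearer token passes any server check by definition; a bound token fails the moment it is presented over a channel whose key material does not match.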


Artificial General Intelligence Is Here, and Impala Is Its Name


AGI is a single intelligence or algorithm that can learn multiple tasks and exhibits positive transfer when doing so, sometimes called meta-learning. During meta-learning, the acquisition of one skill enables the learner to pick up another new skill faster because it applies some of its previous “know-how” to the new task. In other words, one learns how to learn — and can generalize that to acquiring new skills, the way humans do. This has been the holy grail of AI for a long time. As it currently exists, AI shows little ability to transfer learning towards new tasks. Typically, it must be trained anew from scratch. For instance, the same neural network that makes recommendations to you for a Netflix show cannot use that learning to suddenly start making meaningful grocery recommendations. Even these single-instance “narrow” AIs can be impressive, such as IBM’s Watson or Google’s self-driving car tech. 


David Chamberlain, general manager of Licensing Dashboard, says IT can sometimes be over-cautious. “IT people can be worried about things like an Exchange server going down, so there is a tendency to over-provision, and then people will forget to decommission the cloud service when it is no longer needed,” he says. “The cloud is very elastic and is easy to throttle up, which is a big change from on-premise servers.” For Witt, virtual machine (VM) sprawl has always been an issue on-premise, even when companies have had good processes in place. “It is always easier to spin a VM up than it is to decommission it,” he says. “Most datacentres will have a significant proportion of unused VMs – in my experience, it’s around 30-40%.” While on-premise, only a fraction of storage and compute is consumed for these unused VMs, in the cloud, the VM is charged per second, he points out. “You’re charged for the VM size regardless of whether it is fully utilised,” he says.
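The cost of that sprawl is easy to put numbers on. A back-of-the-envelope sketch, using an invented fleet (the hourly rate is hypothetical, and the 35% idle share is simply the midpoint of Witt's 30-40% estimate):

```python
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_vm_cost(hourly_rate, vm_count, hours=HOURS_PER_MONTH):
    # Cloud VMs are billed for size and uptime, regardless of utilisation
    return hourly_rate * hours * vm_count

total_vms, idle_vms, rate = 100, 35, 0.10  # hypothetical fleet
total = monthly_vm_cost(rate, total_vms)
wasted = monthly_vm_cost(rate, idle_vms)
print(f"fleet: ${total:,.0f}/month; idle VMs: ${wasted:,.0f}/month")
```

On-premise, an idle VM mostly ties up storage; in the cloud, the same 35 forgotten VMs keep accruing their full per-second charge every month until someone decommissions them.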


Reprogrammable quantum computers are the "ultimate goal" of current research
The ultimate goal of quantum information programming – a device capable of being reprogrammed to perform any given function – is one step closer following the design of a new generation silicon chip that can control two qubits of information simultaneously. The invention, by a team led by Xiaogang Qiang from the Quantum Engineering Technology Labs at the University of Bristol in the UK, represents a significant step towards the development of practical quantum computing. In a paper published in the journal Nature Photonics, Qiang and colleagues report proof-of-concept of a fully programmable two-qubit quantum processor “enabling universal two-qubit quantum information processing in optics”. The invention overcomes one of the primary obstacles facing the development of quantum computers. Using current technology, operations requiring just a single qubit (a unit of information that is in a superposition of simultaneous “0” and “1”) can be carried out with high precision.


How data breaches are affecting the retail industry

From phishing, vishing and smishing to acquiring consumers’ identification details, or full-blown criminal hacking, the flow of fresh news stories detailing the latest attacks clearly demonstrate the scale of this growing issue. Indeed, such are the risks of data breaches that they are no longer viewed as IT issues, but organisational issues that can derail day-to-day operations and have long-term reputational impact. So, what are the real business costs of a data breach? According to the 2018 Cost of a Data Breach Study by Ponemon Institute, the average cost of a data breach is $3.86 million, which is a 6.4% increase on the 2017 cost of $3.62 million. ... The harsh reality is that no organisation can ever deem itself completely safe and at zero risk of a data breach. However, what you can – and should – do is take a critical look at your infrastructure, processes, systems and controls, and ensure that you have taken steps to address risks and know what to do if you suffer a breach.



Quote for the day:


"Not all readers are leaders, but all leaders are readers." -- Harry S. Truman


Daily Tech Digest - August 21, 2018

Google and banks are being less than truthful about customer tracking

At least the banks, as far as I can tell, didn't say that they weren't tracking people. They merely said nothing either way. But bank app developers need to remember that banks are in a much more precarious position than Google and they need to at least pretend to be trustworthy in a much more public fashion. Why? Google is still the most effective and comprehensive search engine on the planet. I'd love to be able to say that DuckDuckGo or other privacy-oriented engines are as good or better, but based on daily testing, Google still comes out far ahead. Bing, Yahoo and others long ago lost the search battle to Google. That means that an annoyed Google user can't leave Google without losing some serious search functionality. And on an Android phone, the reliance is even deeper and better integrated. But banks? Not even close. Disgruntled customers can easily take their money and data and move to the rival bank across the street, and they will likely suffer no disruption or degradation of services.



If you lose your Android phone or decide to move to another, there's a decent chance your existing text messages will vanish into the digital ether. That might be fine (and hey, who knows, maybe even a positive thing), but if you do want to back up and save your SMS data, it's pretty painless to do. The simplest way is to use a messaging app that does all the heavy lifting for you. If you have one of Google's Pixel phones, Google's own free Android Messages app will automatically back up some of your messages — up to 25MB worth, according to Google, and only SMS texts (not MMS media messages). It's preinstalled as the default messaging app on your device, so you don't have to do anything to get it up and running. If you're using a phone other than a Pixel — or if you're using a Pixel and want something a bit more robust — the third-party Pulse SMS app is an excellent next-level option. In addition to providing its own universally available automatic cloud backup and sync system, it offers plenty of opportunities for customization



The biggest risk in cloud computing is not doing it

First, there are risks of changing any aspect of IT, as we saw when moving to the PC, LANs, client/server, mobile, and the web—all things that made us rethink IT yet again, as well as drive change that also drives risk. Second, if businesses did not take risk then nothing would change—and they would die. So, the cost of risk should always be offset by the value gained in taking the risk. In the case of cloud computing, its better operational efficiency leads to lower operational costs. And cloud computing also improves business agility to better react to market changes and expand quickly as the business grows. These are all game-changers and value drivers for cloud computing. Third, risk can be reduced with planning. That means taking the time to figure out what your issues are, how technology such as cloud computing can address your issues (if it can), and how to reduce the risks in doing so. Security, for example, is always a risk. But addressed with the right approaches and technologies, your cloud-based system will actually be more secure than your “as is” on-premises systems.


What is data deduplication, and how is it implemented?


The usual way that dedupe works is that data to be deduped is chopped up into what most call chunks. A chunk is one or more contiguous blocks of data. Where and how the chunks are divided is the subject of many patents, but suffice it to say that each product creates a series of chunks that will then be compared against all previous chunks seen by a given dedupe system. The way the comparison works is that each chunk is run through a deterministic cryptographic hashing algorithm, such as SHA-1, SHA-2, or SHA-256, which creates what is called a hash. For example, if one enters “The quick brown fox jumps over the lazy dog” into a SHA-1 hash calculator, you get the following hash value: 2FD4E1C67A2D28FCED849EE1BB76E7391B93EB12. If the hashes of two chunks match, they are considered identical, because even the smallest change causes the hash of a chunk to change. A SHA-1 hash is 160 bits, so if you store a 160-bit hash instead of an 8 MB chunk, you save almost 8 MB every time you back up that same chunk. This is why dedupe is such a space saver.
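The chunk-hash comparison can be sketched in a few lines of Python; the SHA-1 of the "quick brown fox" sentence reproduces the hash value quoted above, and the dict here is a toy stand-in for a real product's chunk index:

```python
import hashlib

def dedupe(chunks, store=None):
    """Keep only chunks whose SHA-1 hash has not been seen before."""
    store = {} if store is None else store
    for chunk in chunks:
        digest = hashlib.sha1(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk  # new chunk: store the data once
        # duplicate chunk: only the 160-bit hash needs to be recorded
    return store

sentence = b"The quick brown fox jumps over the lazy dog"
print(hashlib.sha1(sentence).hexdigest().upper())
# 2FD4E1C67A2D28FCED849EE1BB76E7391B93EB12
```

Real products use more sophisticated (often variable-size) chunking and, increasingly, SHA-2 family hashes, but the principle is the same: hash once, then compare 160-or-so bits instead of megabytes.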



Cybersecurity is a proactive journey, not a destination

The topic itself is broad and expansive, and the true impact of this segment of computing will be around for generations to come. For strong perspective on where the industry stands in its current state, ISACA’s State of Cybersecurity 2018 research is a must-read. This report provides a great assessment of what needs to happen in the cybersecurity field to move from reactive to proactive. Challenges around cybersecurity are not new and have actually been around since the dawn of computing. However, it is now a topic that everyone talks about. It is a board topic, it is a public safety and livelihood topic, and it is a personal topic. Hitting this trifecta of impact has finally created the sense of urgency and the attention that is needed. Now, the key is that as an industry, as a country, and as a world of over 7 billion people, we need to effectively address these industry challenges to preserve the computing environment for the future.


Gartner recommends CIOs get skilled up on deep learning


“CIOs and technology leaders should always be scanning the market along with assessing and piloting emerging technologies to identify new business opportunities with high-impact potential and strategic relevance for their business.” Walker said CIOs or a business decision maker can use predictions like the emerging trends of Hype Cycle as a reality check, helping them to prioritise what areas are likely to become established in the near future. “Some of these capabilities are being delivered in a rapid fashion,” said Walker. Gartner’s predictions show that some technologies, particularly in the AI space such as deep learning, virtual assistants and custom silicon for AI, are likely to become mainstream within two to five years, which does not give CIOs much time to get ready. As an example, Walker said the hospitality sector is being disrupted, such as at the Marriott hotel, which is building service bots to deliver room service.


Establish a data classification model for cloud encryption


Behind the scenes, a data classification model should include metadata that sticks with the newly created document throughout its life. This requires an organization to permanently link the document with immutable metadata -- which is where information management systems, such as those from M-Files and FileHold, come into play. By having users choose a template with its associated metadata, data can then be encrypted as required before it hits any storage media, whether that is a local device, an on-site system or a cloud platform. Anything that isn't open/public -- sticking with the example above -- will then be encrypted or dealt with using virtual private networks (VPNs). This approach can also help a business determine if data should primarily be held on premises or in the cloud. Once the basic metadata is created in an immutable manner, users can add extra metadata for further classification.
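The template-driven flow described above can be sketched roughly as follows. The classification levels and the trivial XOR "cipher" are purely hypothetical placeholders; a real deployment would use the information management system's own metadata schema and an authenticated cipher such as AES-GCM with managed keys:

```python
from dataclasses import dataclass

# Hypothetical template: which classification levels require encryption
NEEDS_ENCRYPTION = {"public": False, "internal": True, "confidential": True}

@dataclass(frozen=True)  # frozen mirrors the immutable metadata described above
class DocMetadata:
    doc_id: str
    classification: str

def toy_encrypt(content: bytes) -> bytes:
    # Placeholder only -- stands in for a real authenticated cipher
    return bytes(b ^ 0x5A for b in content)

def prepare_for_storage(meta: DocMetadata, content: bytes) -> bytes:
    """Encrypt anything that isn't open/public before it hits any storage."""
    if NEEDS_ENCRYPTION[meta.classification]:
        return toy_encrypt(content)
    return content
```

The point of the sketch is the ordering: the classification decision is bound to the document at creation time, so the encrypt-or-not branch runs before the bytes ever reach a local device, an on-site system, or a cloud platform.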


This smart bandage can help diagnose and treat your injuries

The bandage is the culmination of over six years' work between Tufts and other higher education institutions to create a bandage that includes sensors to monitor a number of markers that show how well, or otherwise, a wound is healing, alongside a drug delivery mechanism -- all in a form factor that's flexible enough to be wrapped around a wound. "Chronic wounds are a very biologically complex system, and you have to have the bandage interface in very close contact with the wound so you can monitor whether the wound is healing. At the same time, we wanted to find out if there was a way to intervene at the right time to accelerate wound healing," Sameer Sonkusale, professor of electrical and computer engineering at Tufts University's School of Engineering, told ZDNet. The bandage is a combination of a cloth layer and an electronics layer. The electronics layer includes sensors that track the pH and temperature of the wound -- a higher than normal pH or temperature indicates it's not healing well.


Fiber transmission range leaps to 2,500 miles, and capacity increases

Researchers work to improve fiber transmission efficiency, throughput
Signal noise and distortion have always been behind the limits to traditional (and pretty inefficient) fiber transmission. They’re the main reason data-send distance and capacity are restricted using the technology. Experts believe, however, that if the noise that’s found in the amplifiers used for gaining distance could be cleaned up and the signal distortion inherent in the fiber itself could be eliminated, fiber could become more efficient and less costly to implement. Plus, if fiber could carry more traffic in single strands, it would be cheaper to power, and it would also keep up with rapidly escalating future internet growth. Those two areas of improvement are where many scientists are concentrating their fiber development efforts. The researchers at Chalmers University of Technology and Tallinn University of Technology said they can now send data 4,000 kilometers (nearly 2,500 miles) — or roughly the air-travel distance from Los Angeles to New York.


How to overcome the potential for unintended bias in data algorithms

What makes an algorithm “fair?” Let’s say I have a lot more data besides income - things like credit score, job history, etc. I have a large dataset of past outcomes to train an algorithm for future use. Aiming for accuracy alone will almost definitely result in different treatment of people along age, race, and gender lines. To be fair, should I aim to approve the same percentage of people from each class, even if that means taking some risks? Alternatively, I could train my algorithm to equalize the percentage of people from each class that get approved who actually paid back their loan (the true-positive rate which we can estimate from historical data). A bit of a catch - if I do either of these things, I would have to hold the different groups to different standards. Specifically, I would have to say that I will issue a loan to someone of a certain class, but not to someone else of a different class with the exact same credentials, leading to yet another unfair scenario.
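The two fairness criteria in question (equal approval rates vs. equal true-positive rates) can be computed from historical outcomes in a few lines. This is an illustrative sketch; the records below are invented, with `approved` and `repaid` as 0/1 flags:

```python
def group_metrics(records):
    """Per-group approval rate and true-positive rate.

    records: iterable of (group, approved, repaid) tuples from
    hypothetical historical lending data.
    """
    stats = {}
    for group, approved, repaid in records:
        s = stats.setdefault(group, {"n": 0, "approved": 0, "repaid": 0, "tp": 0})
        s["n"] += 1
        s["approved"] += approved
        s["repaid"] += repaid
        s["tp"] += approved and repaid  # approved applicants who repaid
    return {
        g: {
            # share of applicants approved (demographic-parity view)
            "approval_rate": s["approved"] / s["n"],
            # of those who repaid, share that were approved (equal-opportunity view)
            "tpr": s["tp"] / s["repaid"] if s["repaid"] else 0.0,
        }
        for g, s in stats.items()
    }
```

Running this on any real dataset typically shows that equalizing one metric across groups forces the other apart, which is exactly the catch the paragraph above describes: you cannot, in general, satisfy both criteria while holding every applicant to the same threshold.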



Quote for the day:


"Making those around you feel invisible is the opposite of leadership." -- Margaret Heffernan


Daily Tech Digest - August 20, 2018

No matter how good your internal IT security team is, no matter whether you have an internal or external pentesting team, you need a bug bounty program and responsible vulnerability disclosure program as a key part of your IT security. I’ve been with firms that decided, wrongly, they didn’t need a bug bounty program. Each, after years of negative lessons learned, started a bug bounty program. They could have saved themselves some pain by starting one earlier. Every company should consider and deploy all three of these types of programs. I’ve known many otherwise good-hearted hackers who grew frustrated, and even resentful, because a company didn’t have an easy way to report a bug they found, didn’t effectively respond to the outreach, or incorrectly told the hacker that their big find wasn’t a big deal. If you make it hard for good people to report serious things, you’re just asking for trouble. If you don’t already have these functions as a mature part of your organization, you can only benefit by getting involved with a company, crowdsourcing or not, that can help you to set them up.



Why CIOs Haven’t Mastered the Elusive Economics of Innovation

The good news is that cloud computing, including a new breed of cloud-based autonomous (self-tuning, self-repairing, self-updating) platform services powered by machine learning, finally gives CIOs the needed technical framework to start pulling their organizations out of the 80/20 spending rut and accelerate their pace of innovation. By offloading much of the onerous maintenance and security work to expert cloud service providers or to the cloud systems themselves, IT organizations can “free up their imaginations,” while getting access to a range of emerging technologies, says Oracle Senior Vice President Steve Daheb. Consider an HR example. At auto parts retailer AutoZone, newly automated processes for employee background checks and onboarding, made possible by its Oracle HCM Cloud application, already are freeing up company HR and store managers to do less administrative work and more value-added work, such as identifying candidates who are a good fit with the company’s distinctive, go-the-extra-mile customer service culture.


In the UK, 1 out of Every 3 Businesses Had a Cryptojacking Malware Infection

Citrix Research, in a study in which 750 British IT leaders participated, has revealed that a third of large UK companies were affected by cryptojacking incidents in July 2018. Cryptojacking steals processing cycles from workstations, servers, IoT devices, and other computing devices in order to collectively mine cryptocurrency. Instead of elaborate malware with complex functionality, the cybercriminals create or take over a legitimate website to host a cryptojacking script, which performs hashing attempts in hopes of mining cryptocurrency at the machine's expense. All of this mining happens without the user realizing it, a stark contrast to ransomware, which by design needs to announce its existence to the user. The window between infection and eventual detection is therefore wider with cryptojacking malware. Bitcoin and its derivatives are mined using computing devices, but these days mining requires considerable time and processing power. The longer the detection time, the better the chance that the cryptojacking malware will successfully mine virtual coins.


The Usability of Cryptocurrency

If cryptocurrency is going to live up to its hype, it will need to attract users from all professions, backgrounds, and ages. Today, according to an eToro report, most cryptocurrency users are 18- to 35-year-old males working in sales, marketing, IT, and financial services. In other words, cryptocurrency is a trend for people who are already working in a tech-savvy environment. But if such a system’s orientation process targets only these users, anyone who is not tech savvy would likely be lost from the very beginning. Of course, this barrier to entry is just one part of a bigger problem. Crypto enthusiasts are aware that these currencies need to reach a critical mass of users before they are really useful as currencies. Currently, even those who are creating accounts and purchasing cryptocurrency frequently are not using cryptocurrency for its ostensible purpose—buying things! Some individuals may have gotten rich by treating cryptocurrency as an investment vehicle and riding early speculative fluctuations, but this actually presents yet another obstacle for potential users. Investment markets are not user friendly.


Australian Teenager Pleads Guilty to Hacking Apple

The teenager, who legally cannot be named because he is a juvenile offender, pleaded guilty in Australian Children's Court on Thursday to multiple hack attacks against Apple as well as to downloading 90 GB of sensitive information from the company and accessing customers' accounts, Melbourne, Australia-based daily newspaper The Age reported, citing statements made in court. The report says that the boy began his year-long hacking spree when he was 16 years old, motivated in part by his love of Apple gear and hope to one day work for the technology giant. The court heard that after a tipoff from the FBI, the Australian Federal Police last year obtained a search warrant and raided the teenager's family home in Melbourne. "Two Apple laptops were seized and the serial numbers matched the serial numbers of the devices which accessed the internal systems," a prosecutor told the court, The Age reported.


Brendan Eich on JavaScript’s blessing and curse

Being the creator of JavaScript has been a blessing and a curse for Brendan Eich. On the one hand, JavaScript has the distinction of being the most popular programming language in the world. On the other, no language has been the target of more snark. Eich is well aware of the language’s drawbacks—after all, in 1995, he worked around the clock to create JavaScript in a mere 10 days. In this lively interview with IDG’s Eric Knorr, Eich readily admits to JavaScript’s flaws and talks frankly about what he might have done better, while touching on JavaScript’s improvements over its 23-year lifespan. Warts and all, JavaScript has indeed become “the assembly language of the web.” ... WebAssembly supports more than 20 languages, not just JavaScript, opening the door for developers of all stripes to write and compile fast web applications—and causing many to predict that WebAssembly will be central to the future of web development.


Intel buys deep-learning startup Vertex.AI to join its Movidius unit

“There’s a large gap between the capabilities neural networks show in research and the practical challenges in actually getting them to run on the platforms where most applications run,” Ng noted in a statement on the company’s launch in 2016. “Making these algorithms work in your app requires fast enough hardware paired with precisely tuned software compatible with your platform and language. Efficient plus compatible plus portable is a huge challenge—we can help.” For Intel, this could mean using Vertex’s IP to help build its own applications, or potentially applications for its customers. It’s not clear how much funding Vertex.AI had raised. Investors included Curious Capital, which focuses on pre-seed and seed-stage funding for startups in the Pacific Northwest, and the Creative Destruction Lab, a Toronto-based accelerator focused on machine learning startups. Intel doesn’t break out revenues specifically for its Artificial Intelligence Product Group, a business unit it established in March 2017.


Can the police search your phone?

The Australian government on Tuesday proposed a law called the Assistance and Access Bill 2018. If it becomes law, the act would require people to unlock their phones for police or face up to ten years in prison (the current maximum is two years). It would empower police to legally bug or hack phones and computers. The bill would force carriers, as well as companies such as Apple, Google, Microsoft and Facebook, to give police access to the private encrypted data of their customers if technically possible. Failure to comply would result in fines of up to $7.3 million and prison time. Police would need a warrant to crack, bug or hack a phone. The bill may never become law. But Australia is just one of many nations affected by a new political will to end smartphone privacy when it comes to law enforcement. If you take anything away from this column, please remember this: The landscape for what’s possible in the realm of police searches of smartphones is changing every day.


GDPR: Data Protection Is Only The Tip Of The Iceberg

In almost every type of business process, unstructured information is created, required, or exchanged. And while the creator or recipient of that content will likely understand its full context and thus its importance, only too soon that memory fades, and the content is effectively lost to the organization. Even if an individual recollects the content’s existence and location, no connection is maintained between the content itself and the context of the business process that made it relevant in the first place. Further complicating matters, stakeholders – increasingly spread across various global locations – often collaborate using multiple environments or applications, making complete visibility nearly impossible. What’s more, because the majority of team communication occurs through email, a lot of project-relevant content and key audit-trail information is lost or invisible through normal productivity tools.


To succeed at digital transformation, do a better job of data governance

Regardless of the reason an organization undertakes a digital transformation—be it to glean operational insights, change the way it engages with customers or to set the stage for other emerging technologies such as machine learning and artificial intelligence—it needs reliable data as its foundation. And that requires robust data governance. Some consider data governance essential only for cross-departmental collaboration—such as sharing customer data. But it also plays a key role in turning seemingly unrelated sources of data into insightful sources of information. Data governance uses a set of defined roles, processes and policies to help manage data assets and ensure their integrity, accuracy and security. Without these structures and controls, data assets lose much of their strategic value. Without effective data governance, no one can be certain about what data assets a company has, who controls them, what information they can provide and how they should be used.



Quote for the day:


"Leadership is intangible, and therefore no weapon ever designed can replace it." -- Omar N. Bradley


Daily Tech Digest - August 19, 2018


GPUs are already not commoditized relative to CPUs, and what we’re seeing with the huge surge of investment in AI chips is that GPUs will ultimately be replaced by something even more specialized. There is a bit of irony here considering Nvidia came into existence with the premise that Intel’s x86 CPU technology was too generalized to meet the growing demand for graphics intensive applications. This time, neither Intel nor Nvidia are going to sit on the sidelines and let startups devour this new market. The opportunity is too great. The likely scenario is that we’ll see Nvidia and Intel continue to invest heavily in Volta and Nervana. AMD has been struggling due to interoperability issues but will most likely come up with something usable soon. Microsoft and Google are making moves with Brainwave and the TPU, and a host of other projects; and then there are all the startups. The list seems to grow weekly, and you’d be hard-pressed to find a venture capital fund that hasn’t made a sizable bet on at least one of the players.



Will AI Disrupt Our Financial Systems?


According to the WEF report, “unlocking the full potential of AI requires an extensive network of partnerships.” Financial institutions will become “ecosystem curators” with “massive scale of data and insight.” China-based Ping An’s One Connect provides hundreds of small and medium-sized banks services developed on AI technology. The company has data from “over 880 million users, 70 million businesses and 300 partners” that fuel its suite of applications for finance, insurance, payments, and even telemedicine. Partnerships between financial service institutions and technology companies are on the rise. For example, WEF highlights the recently announced joint venture between Amazon, Berkshire Hathaway and JPMorgan Chase to develop a health plan for employees. The challenges of this new dynamic include protecting proprietary data between institutional partners, selecting the right services that will generate revenue, and regulating third-party services.



TensorFlow 2.0 Is Coming; Here’s What You Should Look Forward To

TensorFlow would soon be holding a series of public design reviews covering the planned changes. “This process will clarify the features that will be part of TensorFlow 2.0, and allow the community to propose changes and voice concerns,” said Wicke. Wicke also added that to ease the transition, they would be creating a conversion tool which would update Python code to use TensorFlow 2.0 compatible APIs, or at least warn the user in cases where such a conversion is not possible automatically. “TensorFlow’s tf.contrib module has grown beyond what can be maintained and supported in a single repository. Larger projects are better maintained separately, while we will incubate smaller extensions along with the main TensorFlow code. Consequently, as part of releasing TensorFlow 2.0, we will stop distributing tf.contrib. We will work with the respective owners on detailed migration plans in the coming months, including how to publicise your TensorFlow extension in our community pages and documentation,” Wicke concluded.
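Since the conversion tool itself had not shipped at the time of writing, here is a toy sketch of the kind of mechanical symbol renaming such a tool performs. The two mapping entries below are real 1.x-to-2.0 renames, but everything else, including the warning behavior, is invented for this illustration and is not the actual tool's design.

```python
import re

# Toy illustration only: the planned conversion tool does far more (argument
# reordering, semantic rewrites, etc.). The two RENAMES entries are real
# 1.x -> 2.0 renames; the rest is a sketch, not the actual tool.
RENAMES = {
    "tf.random_normal": "tf.random.normal",
    "tf.log": "tf.math.log",
}
TF_SYMBOL = re.compile(r"tf\.[A-Za-z_.]+")

def convert(source):
    """Rewrite known 1.x symbols in `source`; report tf.* names with no mapping."""
    warnings = []

    def rename(match):
        name = match.group(0)
        if name in RENAMES:
            return RENAMES[name]
        warnings.append("no automatic conversion for " + name)
        return name  # leave unknown symbols untouched

    return TF_SYMBOL.sub(rename, source), warnings

new_src, warns = convert("x = tf.log(tf.random_normal([3]))")
```

Symbols without a known mapping are left in place and surfaced as warnings, mirroring the announced behavior of warning the user when a conversion cannot be done automatically.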


Alibaba, through its subsidiary Lynx International, integrated blockchain technology to track information in its cross-border logistics services. With the successful application of blockchain, Lynx can keep an immutable record of shipment information such as production, transportation, customs, inspection, and any third-party verification. “Although the concept of blockchain has only recently started to emerge, it has a very wide range of applications,” said Tang Ren, technical director at Lynx. For a shipping and logistics arm like Lynx, security and transparency cannot be overemphasized, so it is really no surprise that Alibaba looked no further than blockchain. More recently, another of Alibaba’s subsidiaries, T-Mall, in partnership with Cainiao, adopted blockchain technology for its cross-border supply chain. Similar to the Lynx project, blockchain is being used to track information about shipments from over 50 countries.
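Alibaba has not published its implementation, but the "immutable record" idea can be sketched minimally: chaining each entry to the hash of its predecessor makes a shipment log tamper-evident. The event names and record structure below are invented for illustration.

```python
import hashlib
import json

# Minimal sketch (not Alibaba's actual system): each shipment event embeds the
# hash of the previous entry, so altering any historical record breaks the chain.
def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_event(chain, event):
    prev = entry_hash(chain[-1]) if chain else "0" * 64
    chain.append({"event": event, "prev_hash": prev})
    return chain

def verify(chain):
    """True if no historical entry has been tampered with."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != entry_hash(chain[i - 1]):
            return False
    return True

log = []
for step in ("production", "transportation", "customs", "inspection"):
    append_event(log, step)

assert verify(log)           # intact chain checks out
log[1]["event"] = "forged"   # tampering with history...
assert not verify(log)       # ...is detected downstream
```

A real deployment distributes this chain across parties so no single participant can quietly rewrite it, which is what makes the record trustworthy across production, transport, customs, and inspection steps.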



Is data science a bubble?

Though that sounds like prime bubble territory, I’m actually pretty optimistic. Growing data means growing opportunities — it all just needs good management. My friend, for example, ended up conquering a lot of his problems by recognizing that the rest of his organization needed training in how to work with data scientists. Since then, his teams have been more thoughtful about how to assign work and great things followed. Training decision-makers in how to make use of data science saved the day!
Check that your decision-makers have the right skills for working with data scientists. If a bubble exists, that might be the root of it. The challenge for today’s data science leaders is to help decision-makers get training like that, creating more people with the skills to point the technical brilliance of data scientists in valuable directions. Once data scientists are able to make themselves useful, keeping them around becomes a no-brainer rather than a matter of fashion. Will we manage it before the data scientist title falls out of favor and they scramble toward another rebranding?


CBP launching blockchain testing

“Data without borders,” a fundamental principle of blockchain technology, “sounds good if you’re only looking at shipping, but you have to take into account that we have importation entry data coming in, and we have 47 agencies ... that aren’t just going to give up their sovereignty of their laws and their rules,” Annunziato said. “So it’s a very interesting time right now, but I think it’s a good time for the government to be involved because we’re starting to really push forward and make sure things are honest and working the way they’re supposed to.” CBP also is working with the Commercial Customs Operations Advisory Committee (COAC) on a proof-of-concept exercise exploring the use of blockchain in the intellectual property environment to identify IP licensees and licensors, Annunziato said. “So if you have a rights holder that is granting licenses to Company A, and then did they also grant the right for Company A to license out? You can now follow generationally what’s going on. So in a way the government’s got a view of that interaction with the company, and we see it as a worthwhile venture for the rights holders,” he said.


How to prevent phishing by studying the psychology behind digital fraud

Two researchers working in Carnegie Mellon University's Department of Social and Decision Science decided to look beyond the reasons why users fall for online fraud attacks. "Psychological research on human adversarial behavior is necessary to uncover factors that determine how deception and phishing strategies originally manifest in phishing emails," explain Prashanth Rajivan and Cleotilde Gonzalez in their coauthored paper Creative Persuasion: A Study on Adversarial Behaviors and Strategies in Phishing Attacks. "Currently, there is a severe lack of work on the psychology of criminal behaviors in cybersecurity." The two decided to change that, looking specifically at the importance of incentives, how much of a role creativity plays, and the effect of adversarial strategies on attack success. To determine the importance of each item, Rajivan and Gonzalez developed a two-part experiment consisting of these phases.


Google Is Turning Itself Into An AI Company

To maintain its foothold and protect its main source of revenue, Alphabet (Google’s parent company) is positioning itself to dominate adjacent sectors — such as digital commerce, branded hardware products, and content — and attempting to integrate its services into every aspect of the digital user experience. The company is also seeking out new streams of revenue in sectors with large addressable markets, namely on the enterprise side with cloud computing and services. Furthermore, it’s looking at industries ripe for disruption, such as transportation, logistics, and healthcare. Unifying Alphabet’s approach across initiatives is its expertise in AI and machine learning, which the company believes will help it become an all-encompassing service for both consumers and enterprises. In this teardown, we dive into Google’s approach to maintaining its search platform dominance, outlining the strategic investments, acquisitions, and partnerships across its top priorities moving forward.


On Becoming A Scientific HR Function – Learning From Amazon And Google


As Amazon puts it, “We manage HR as a business.” So, rather than simply “aligning with business goals,” the scientific model focuses HR actions and resources so that they produce the maximum direct, measurable impact on business results. The two primary areas where HR can traditionally produce the highest business impacts are increasing the productivity of the workforce and improving the volume and speed of product and process innovation. Under this scientific approach, HR focuses on solving broad strategic business problems (e.g., decreased sales, product development or missed deadlines) rather than tactical HR problems. And finally, under this model, HR problems and results are converted to their dollar impact on revenue (e.g., the retention efforts on salespeople allowed us to maintain $2.5 million in sales revenue). Reporting results in revenue impact dollars allows executives to quickly compare HR’s dollar impacts to those from other business functions.


Hybrid optical-electronic CNN with optimized diffractive optics for image classification

Convolutional neural networks (CNNs) excel in a wide variety of computer vision applications, but their high performance also comes at a high computational cost. Despite efforts to increase efficiency both algorithmically and with specialized hardware, it remains difficult to deploy CNNs in embedded systems due to tight power budgets. Here we explore a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time. We propose a design for an optical convolutional layer based on an optimized diffractive optical element and test our design in two simulations: a learned optical correlator and an optoelectronic two-layer CNN. We demonstrate in simulation and with an optical prototype that the classification accuracies of our optical systems rival those of the analogous electronic implementations, while providing substantial savings on computational cost.
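The paper's optical element is a physical diffractive component, not software, but the architecture it describes can be sketched numerically: a fixed "optical" convolution applied before any electronic computation, followed by a small electronic stage. In the toy code below the FFT-based convolution stands in for what a 4f optical correlator would apply; the kernel, weights, and sizes are placeholders, not the authors' design.

```python
import numpy as np

# Crude numerical sketch of the hybrid idea (not the paper's implementation):
# a fixed "optical" convolutional front end, then a tiny electronic classifier.
rng = np.random.default_rng(0)

def optical_conv(image, kernel):
    """Stand-in for the optical layer: convolution via FFT, as a 4f system applies it."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel, s=image.shape)))

def electronic_classifier(features, weights, bias):
    """Tiny electronic stage: flatten the optical output, apply one linear layer."""
    logits = features.ravel() @ weights + bias
    return int(np.argmax(logits))

image = rng.random((8, 8))       # placeholder input image
kernel = rng.random((3, 3))      # placeholder diffractive-element response
weights = rng.random((64, 10))   # 8*8 features -> 10 classes
bias = np.zeros(10)

features = optical_conv(image, kernel)   # "free" computation done in optics
label = electronic_classifier(features, weights, bias)
```

The appeal of the hybrid design is visible even in this sketch: the convolution, usually the dominant electronic cost, is performed by light propagation at effectively zero power, leaving only the small linear stage for the electronics.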



Quote for the day:


"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis