Daily Tech Digest - October 23, 2018

A New Era of Adobe Document Cloud and Acrobat DC

PDF started out as a noun, but quickly became a verb. The adjective “portable,” an essential descriptor in the pre-internet era of 1993, doesn’t begin to capture all the rich capabilities of today’s PDF. The PDF is still portable, of course, but it’s also editable, reviewable, reliable and sign-able. And it’s universal: in the last year alone, some 200 billion PDFs were opened in Adobe products. And we’re accelerating the pace of innovation in PDF and the role it will play for future generations. Today, we’re announcing major advancements in Adobe Document Cloud and Acrobat DC, redefining what’s possible with PDF. This launch includes new PDF share and review services, dramatic enhancements to touch-enabled editing on tablets, and a redesigned way to send documents for signature with Adobe Sign, all built into Acrobat DC. It’s all about making it easier to create, share and interact with PDFs, wherever you are. New Adobe Sensei-powered functionality automates repetitive tasks and saves time, whether you’re scanning a document, filling out a form or signing an agreement.


Establishing an AI code of ethics will be harder than people think

In an attempt to highlight how divergent people’s principles can be, researchers at MIT created a platform called the Moral Machine to crowd-source human opinion on the moral decisions that should be followed by self-driving cars. They asked millions of people from around the world to weigh in on variations of the classic "trolley problem" by choosing who a car should try to prioritize in an accident. The results show huge variation across different cultures. Establishing ethical standards also doesn’t necessarily change behavior. In June, for example, after Google agreed to discontinue its work on Project Maven with the Pentagon, it established a fresh set of ethical principles to guide its involvement in future AI projects. Only months later, many Google employees feel those principles have fallen by the wayside amid its bid for a $10 billion Department of Defense contract. A recent study out of North Carolina State University also found that asking software engineers to read a code of ethics does nothing to change their behavior.


Quantum computing: A cheat sheet


Using quantum computation, mathematically complex tasks that are at present typically handled by supercomputers — protein folding, for example — can theoretically be performed by quantum computers at a lower energy cost than transistor-based supercomputers. While current quantum machines are essentially proof-of-concept devices, the algorithms that would be used on production-ready machines are already being tested, to ensure that the results are predictable and reproducible. At the current stage of development, a given problem can be solved by both quantum and traditional (binary) computers. As the manufacturing processes used to build quantum computers are refined, it is anticipated that they will become faster at computational tasks than traditional, binary computers. Further, quantum supremacy is the threshold at which quantum computers are theorized to be capable of solving problems that traditional computers would not (practically) be able to solve. Practically speaking, quantum supremacy would provide a superpolynomial speed increase over the best known (or possible) algorithm designed for traditional computers.
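For intuition about where the quantum advantage originates, consider a single qubit: the Hadamard gate turns a definite basis state into an equal superposition of both outcomes. The sketch below simulates that one gate classically in plain Python — an illustration of the underlying math only, since real quantum hardware is not a matrix simulator, and the function names are just for this example:

```python
import math

# A qubit state as complex amplitudes [alpha, beta] for the |0> and |1> basis states.
ket0 = [1.0, 0.0]

def hadamard(state):
    """Apply the Hadamard gate: maps |0> to an equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Measurement probabilities are the squared magnitudes of the amplitudes."""
    return [abs(amp) ** 2 for amp in state]

superposed = hadamard(ket0)
print(probabilities(superposed))  # both outcomes come out at ~0.5
```

Simulating n qubits this way needs 2^n amplitudes, which is exactly why classical simulation becomes impractical and why quantum supremacy is framed as a superpolynomial speed-up.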


The Untapped Potential in Unstructured Text


Unstructured text is the largest human-generated data source, and it grows exponentially every day. The free-form text we type on our keyboards or mobile devices is a significant means by which humans communicate our thoughts and document our efforts. Yet many companies don’t tap into the potential of their unstructured text data, whether it be internal reports, customer interactions, service logs or case files. Decision makers are missing opportunities to take meaningful action around existing and emerging issues. ... Natural language processing (NLP) is a branch of artificial intelligence (AI) that helps computers understand, interpret and manipulate human language. In general terms, NLP tasks break down language into shorter, elemental pieces and try to understand relationships among those pieces to explore how they work together to create meaning. The combination of NLP, machine learning and human subject matter expertise holds the potential to revolutionize how we approach new and existing problems.
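As a toy illustration of breaking language into elemental pieces, the sketch below tokenizes a service-log snippet and counts term frequencies — roughly the first step of many NLP pipelines. The regex and the sample text are assumptions made for the example, not taken from the article:

```python
import re
from collections import Counter

def tokenize(text):
    """Break free-form text into lowercase word tokens — a first, elemental NLP step."""
    return re.findall(r"[a-z']+", text.lower())

# A hypothetical service-log entry of the kind companies leave untapped.
log = "Customer reports login error. Login error persists after reset."

tokens = tokenize(log)
print(Counter(tokens).most_common(2))  # [('login', 2), ('error', 2)]
```

Even this crude frequency count surfaces the emerging issue ("login error"); real NLP systems go further by modeling the relationships between tokens.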


How AI Is Shaking Up Banking and Wall Street

San Francisco–based Blend, for one, provides its online mortgage-application software to 114 lenders, including lending giant Wells Fargo, shaving at least a week off the approval process. Could it have prevented the mortgage meltdown? Maybe not entirely, but it might have lessened the severity as machines flagged warning signs sooner. “Bad decisions around data can be found instantaneously and can be fixed,” says Blend CEO and cofounder Nima Ghamsari. While banks are not yet relying on A.I. for approval decisions, lending executives are already observing a secondary benefit of the robotic process: making home loans accessible to a broader swath of America. Consumers in what Blend defines as its lowest income bracket—a demographic that historically has shied away from applying in person—are three times as likely as other groups to fill out the company’s mobile application. Says Mary Mack, Wells Fargo’s consumer banking head: “It takes the fear out.”


The Hidden Risks of Unreliable Data in IIoT

One of the key goals of Industry 4.0 is business optimization, whether it’s from predictive maintenance, asset optimization, or other capabilities that drive operational efficiencies. Each of these capabilities is driven by data, and their success is dependent on having the right data at the right time fed into the appropriate models and predictive algorithms. Too often data analysts find that they are working with data that is incomplete or unreliable. They have to use additional techniques to fill in the missing information with predictions. While techniques such as machine learning or data simulations are being promoted as an elixir for bad data, they do not fix the original problem of the bad data source. Additionally, these solutions are often too complex and cannot be applied to certain use cases. For example, there are no “data fill” techniques that can be applied to camera video streams or patient medical data. Any data quality management effort should start with collecting data in a trusted environment.
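To make the “data fill” idea concrete, here is a minimal sketch of one such technique — linear interpolation across interior gaps in a sensor series. The function name and the data are illustrative only, and the sketch assumes gaps never touch the ends of the series; as the article stresses, this predicts the missing values rather than recovering them:

```python
def fill_gaps(readings):
    """Fill interior None gaps in a sensor series by linear interpolation.

    This is a prediction, not a recovery: the true readings are gone,
    and the original problem (an unreliable data source) remains.
    Assumes the first and last readings are present.
    """
    filled = list(readings)
    for i, value in enumerate(readings):
        if value is None:
            lo = max(j for j in range(i) if readings[j] is not None)
            hi = min(j for j in range(i + 1, len(readings)) if readings[j] is not None)
            frac = (i - lo) / (hi - lo)
            filled[i] = readings[lo] + frac * (readings[hi] - readings[lo])
    return filled

# Two dropped temperature samples, filled with straight-line estimates.
print(fill_gaps([20.0, None, None, 26.0]))  # ~[20.0, 22.0, 24.0, 26.0]
```

Note that nothing comparable exists for a dropped video frame or a missing clinical measurement, which is the article's point about starting with trusted collection instead.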


AI can’t replace doctors. But it can make them better.


It’s not that we don’t have the data; it’s just that it’s messy. Reams of data clog the physician’s in-box. It comes in many forms and from disparate directions: objective information such as lab results and vital signs, and subjective concerns that come in the form of phone messages or e-mails from patients. It’s all fragmented, and we spend a great deal of our time as physicians trying to make sense of it. Technology companies and fledgling startups want to open the data spigot even further by letting their direct-to-consumer devices—phone, watch, blood-pressure cuff, blood-sugar meter—send continuous streams of numbers directly to us. We struggle to keep up with it, and the rates of burnout among doctors continue to rise. How can AI fix this? Let’s start with diagnosis. While the clinical manifestations of asthma are easy to spot, the disease is much more complex at a molecular and cellular level. The genes, proteins, enzymes, and other drivers of asthma are highly diverse, even if their environmental triggers overlap.


4 steps for solution architects to overcome RPA challenges


As important as it is for solution architects to talk about the technology, it’s equally crucial that they listen to concerns, too. Especially when discussing automation – or even just mentioning the word robotics – architects must realize it’s natural for some groups to have anxiety about being replaced or having their positions eliminated. This belief often stems from a lack of education about the technology. In times like these, solution architects need to understand where clients are coming from and leverage the questions as opportunities to communicate the real benefit of RPA: making employees’ lives easier by removing tedious tasks, like data entry, that most workers don’t enjoy. Managers also need to know that this level of automation will likely result in greater trust in the accuracy of the data, which is something that can’t be easily assessed when humans are doing the data extraction and entry. Overcoming executives’ fears about job security is vital to a successful implementation, because with better client education comes more comfort with RPA.


Feds Charge Russian With Midterm Election Interference

It's unclear how prosecutors zeroed in on Khusyaynova, who presumably remains in Russia. The U.S. and Russia do not have an extradition treaty, which means she can likely avoid arrest indefinitely, provided she remains there. The complaint against Khusyaynova contains a surprising amount of detail about how Project Lakhta was funded and its finances managed. The level of detail suggests that investigators managed to work with a source who was part of or closely affiliated with the project. The complaint describes financial documents, emails and paperwork that allegedly lay bare the source of the project's funding and its aims. The Justice Department alleges Project Lakhta was funded and ultimately controlled by a trio of companies operating under the name of Concord, run by Yevgeniy Viktorovich Prigozhin, described in the complaint as being "a Russian oligarch who is closely identified with Russian President Vladimir Putin."


The OSI model explained: How to understand the 7 layer network model

OSI model
For IT professionals, the seven layers refer to the Open Systems Interconnection (OSI) model, a conceptual framework that describes the functions of a networking or telecommunication system. The model uses layers to help give a visual description of what is going on with a particular networking system. This can help network managers narrow down problems (Is it a physical issue or something with the application?), as well as computer programmers (when developing an application, which other layers does it need to work with?). Tech vendors selling new products will often refer to the OSI model to help customers understand which layer their products work with or whether they work “across the stack”. The model was conceived in the 1970s, when computer networking was taking off; two separate models were merged in 1983 and published in 1984 to create the OSI model that most people are familiar with today. Most descriptions of the OSI model go from top to bottom, with the numbers going from Layer 7 down to Layer 1.
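For quick reference, the seven layers can be jotted down in a few lines of Python, top-down from Layer 7 to Layer 1 as most descriptions present them (the `describe` helper is just illustrative):

```python
# The seven OSI layers, keyed by layer number.
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",
    2: "Data Link",
    1: "Physical",
}

def describe(layer):
    """Format one layer the way most OSI walkthroughs label it."""
    return f"Layer {layer}: {OSI_LAYERS[layer]}"

# Print top to bottom, Layer 7 down to Layer 1.
for n in sorted(OSI_LAYERS, reverse=True):
    print(describe(n))
```

A vendor claiming a product works “across the stack” is, in these terms, claiming relevance at several of these keys at once rather than just one.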



Quote for the day:


"Have courage. It clears the way for things that need to be." -- Laura Fitton


Daily Tech Digest - October 22, 2018

Saudi SoftBank relationship and the Tesla miracle: are techs set to boom or crash?
Human nature is a funny thing, and often has little to do with rational decision making. In 2008, the global economy descended into crisis; for a while it felt as if capitalism itself was tottering. The Queen of England famously asked: “Why didn’t anyone see it coming?” Actually, many did. But as a whole, economists and politicians who warned of a major crisis in the making fell under the media radar or were dismissed as doomsayers. Yet even among these ‘Cassandras’, few anticipated the full extent of the crisis to follow. There may be a good reason for this. A new book, Crisis of Beliefs: Investor Psychology and Financial Fragility, by the economists Nicola Gennaioli and Andrei Shleifer, argues that one of the reasons the 2008 crisis was so severe is that people changed. It seems that human psychology may at least partially explain the crash of 2008: investors’ beliefs about the level of debt or leverage that was sustainable, for example, changed. And that’s the tricky thing about predicting stock markets. Human nature, especially when it is aggregated and subjected to forces such as groupthink, is notoriously difficult to understand, let alone predict.


Firms need stronger metrics and skills to outpace cyber threats

The use of security metrics and the formation of security teams should be viewed as complementary activities, though for many organizations some upskilling will be necessary, Robinson explained. "Foundational skills such as network security, endpoint security and threat awareness still form the bedrock of a strong team,” Robinson said. “But as the cloud and mobility have become ingrained into IT operations, other skills have taken on equal or greater importance.” In order to acquire the security skills organizations require, many are primarily looking to train current employees or expand their use of third-party security expertise. New hires and new partnerships are usually secondary considerations, Robinson explained. When it comes to the use of external resources, 78 percent of companies rely on outside partners for some or all of their security needs. Many firms rely on more than one partner, another indicator of the complexity of cybersecurity, Robinson explained.


FDA Calls for 'Cybersecurity Bill of Materials' for Devices

"Because of the rapidly evolving nature of cyber threats, we're updating our [premarket] guidance to make sure it reflects the current threat landscape so that manufacturers can be in the best position to proactively address cybersecurity concerns when they are designing and developing their devices," says FDA Commissioner Scott Gottlieb, M.D. "This is part of the total product lifecycle approach to device safety, in which manufacturers must adequately address device cybersecurity from the design phase through the device's time on the market to help ensure patients are protected from cybersecurity threats." The draft guidance provides updated recommendations on cybersecurity considerations for device design, labeling and documentation that should be included in premarket submissions for agency approval of medical devices that have cybersecurity risk, FDA notes. The agency will conduct a public workshop for industry stakeholders on Jan. 29-30, 2019, to discuss the newly released draft guidance before it's finalized.


To beat Bloomberg, Symphony is letting banks’ bots talk with each other

“If you can create a whole ecosystem that connects every individual without dropping anyone, you can create a network much greater than what Bloomberg has done,” Gurle said in an interview on the sidelines of Symphony’s Innovate conference in New York recently. “The key is openness.” Gurle compares Symphony to America’s interstate highway system. In that analogy, the banks are cities and towns, and use their own cars to travel on a network Symphony has built. The advantage is that banks can use their own proprietary systems and still interface with other systems. Using Symphony, banks are deploying chatbots that “talk” amongst themselves to make and settle trades. Bots at RBC and AllianceBernstein, for example, can execute trades with each other over the Symphony platform, while BlackRock and BNP Paribas use them to settle mismatched foreign-exchange swaps.


How to make automation part of your microservices security


A modern application stack has four layers: infrastructure, data, networking and application code. At each of these layers, containers and microservices introduce a new way to deliver apps. As a result, container orchestration tools like Kubernetes are central to microservices management. While many security tools that work for standard applications produce effective results when applied to a microservices application, two aspects of microservices require additional attention and protection: application security and container security. Fortunately, there are plenty of advanced automation tools that support the fast and agile requirements of microservices security. Microservices application security is important because it involves multiple services rolled into one app. Those multiple services all work together to deliver a unified experience, and that means it's essential to perform dynamic testing on the services at the application level. In a microservices system, networking occurs between the services, as well as at the instance level.


Samsung Starts Mass Production of Chips Using Its 7nm EUV Process Tech

Samsung’s 7LPP manufacturing technology offers impressive advantages over the company’s 10LPE, specifically for mobile SoCs. Meanwhile, in a bid to make the process attractive to a broad range of potential customers, the foundry offers a comprehensive set of design-enablement tools, interface IP (controllers and PHY), reference flows, and advanced packaging solutions. The final PDK is months away, but many customers may start development of their 7LPP SoCs even with the existing set of solutions. At this point 7LPP is supported by numerous Samsung Advanced Foundry Ecosystem (SAFE) partners, including Ansys, Arm, Cadence, Mentor, SEMCO, Synopsys, and VeriSilicon. Among other things, Samsung and the aforementioned companies offer such interface IP solutions as HBM2/2E, GDDR6, DDR5, USB 3.1, PCIe 5.0, and 112G SerDes. Therefore, developers of SoCs due in 2021 and onwards, which will rely on PCIe Gen 5 and DDR5, can start designing their chips right now.


9 Principles of Service Design


It’s important to realize services are not tangible goods. An interface is not a service. A product is not a service. Shostack states, “People confuse services with products and with good manners. But a service is not a physical object and cannot be possessed. When we buy the use of a hotel room, we take nothing away with us but the experience of the night’s stay. When we fly, we are transported by an airplane but we don’t own it. Although a consultant’s product may appear as a bound report, what the consumer bought was mental capability and knowledge, not paper and ink. A service is not a servant; it need not be rendered by a person. Even when people are the chosen means of execution, they are only part of the process.” This makes it quite difficult to design for services. Often, the design of a service is overlooked by organizations and decisions related to the service supporting a product are not routinely considered in relation to how they impact the overall design of an experience. This results, most often, in poor service design and a poor experience.


Why Managed Threat Hunting?

Increasingly, threat hunting is a practice that enterprises want to understand and implement. But it is not always feasible to do so in-house, given the demand for resources and skills. That's where managed threat hunting enters, says CrowdStrike's Jennifer Ayers. Ayers, VP, OverWatch and Security Response at CrowdStrike, says the in-house/managed services decision is becoming a common, pragmatic discussion. "Companies want to be able to build out all this stuff, but in reality, if you only have $100, do you want to focus that $100 on building out a threat hunting organization that might only find evil once or twice a year in your particular environment, or do you want to use that funding to shore up your defense and response to those types of attacks?" In an interview on managed threat hunting, Ayers discusses: Her perspective on threat hunting; In-house vs. outsourced threat hunting; and The latest threats and how to defend against them.


Public cloud management tools lacking, research finds


Network engineer Brian Keys took a look at network resiliency and why it's so difficult for enterprises to have a network that's highly available. For one thing, nobody wants to pay for the technology necessary to achieve that goal. Additionally, finding architects with the experience to design a highly available network isn't easy. Still, Keys said, enterprises can take steps to improve their network's reliability. The use of uninterruptable power supplies is a good approach. So are redundant links for branch office connectivity. But knowing which techniques are necessary and which ones are just nice to have requires careful study. "A competent network designer should be able to tell with a high degree of certainty just how resilient the network is and in which ways," Keys said. "Probably the toughest part is to explain to upper management the pros and cons of the new proposal and get their buy-in."


The Hub of All Things: Are you collecting personal data the wrong way?

It’s a radical shift from the way organisations collect and access personal data, but Holtby thinks it is better not only for consumers but for organisations too. He explained: “Most companies treat the personal data of their users in a way that is, at best, hamstrung and at worst completely dysfunctional. “I would argue in the future most companies are going to want to have a pretty clear understanding of who their users are and who their customers are. They want to know as much as they can about those people. At best, even the very biggest companies today have a very limited understanding of who their users are. “The quintessential ‘I know who my user is’ kind of company, I would argue at the moment, is Google. Many think of Google as being the company that has the most data about its users. If you are being charitable to Google you could say it knows everything it would possibly want to know about its users, but in reality, all they have is Google’s data.



Quote for the day:


"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis


Daily Tech Digest - October 21, 2018

Do you have a lot of receipts, business cards, or other printed documents that you want to digitize and store? If so, one tool up to this job is Microsoft's free mobile Office Lens app. With Office Lens for iOS or Android, you can use your device's camera to snap a photo of a note, card, or other document. You can capture the image as a whiteboard, a document, a business card, or a photo. Then you can edit and revise it by cropping it, flipping it, drawing on it, and adding text to it. When you're done, you can store the image as a PDF file, a Word document, a PowerPoint slide, or a OneNote file. You can also save the image to your mobile gallery or to Microsoft OneDrive; in fact, the latest version of OneDrive for iOS directly integrates Office Lens. Let's look at how to use Office Lens to capture your printed documents. First, download and install the Office Lens app on your iPhone, iPad, or Android device. Open the app and give it the necessary permission to access your photos and camera.


Continuous Integration at Intel for the Mesa Graphics Library

Mesa CI is a set of configuration files, a job scheduler and a job implementation that can run on Jenkins. Written mostly in Python, it is driven by the principle that "the most important design consideration for the Mesa CI is to minimize configuration in Jenkins". The Mesa CI can theoretically run on top of any CI infrastructure, not just Jenkins, according to the documentation. It’s currently used for developer testing, release verification, pre-silicon (hardware) testing in simulators for Intel drivers, performance testing and validation of conformance test suites. The typical developer testing turnaround time is 30 minutes even though a commit to the master branch kicks off millions of tests. A custom database provides immediate access to test history, and the system also generates performance trend lines for common benchmarks.


Integrating factor of Big data and Artificial intelligence in Business


Now, companies and businesses have the chance to explore the potential of AI with seemingly inexhaustible data as opposed to what was previously obtainable, thus unraveling all the intricate aspects of the process. Instead of depending on sample data, experts are now able to utilize unquantifiable amounts of data. This advancement has propelled enterprises to a point where they can deliver content without any form of irrelevant data while offering more suggestive and extrapolative data that is valuable when interpreted with the aid of “analytical sandboxes” or big data “centers of excellence”.  ... The landscape of artificial intelligence has experienced explosive advancement with the accessibility of big data, thus triggering disorderly transformations. The widespread explosion of data, in addition to the advancement in the capacity to store and evaluate staggering amounts of data with efficiency and pace, is directly responsible for propelling the relevance of AI. This transcends the conventional role of analyzing data. More than ever before, AI is becoming an invaluable tool for accurate assessment and decision making.


Why Digital Banking Should Include A Human Component

While effective implementation of digital strategy is critical for banks and credit unions, human interaction cannot be ignored. Technology can be used to augment the human experience and empower both customers and employees. A common example is customer service: while chatbots and AI can be deployed to address most of the use cases, we must ensure that there are options for humans to intervene when needed, as well as human touchpoints throughout the customer journey to build trust and rapport. Bots need to be trained to learn how to empathize, and to understand regional and generational differences. After all, such technology should reflect a brand’s identity and can positively (or negatively) impact customer perception. At the end of the day, technology is just a means to an end. The winning formula is not about more or fewer shiny new toys – but rather, leveraging appropriate technology to meet the needs of customers. The future of finance is also not about having a pretty user interface or making small incremental changes.


How Close Are We to Kubrick's AI-Controlled Vision of the Future?


HAL learned from observing its environment, watching and analyzing the words, facial expressions and movements of the human astronauts on the spaceship. It was responsible for performing rote functions such as maintaining the spaceship, but as a "thinking" computer, HAL also was capable of responding conversationally to the astronauts, Murphy explained. However, when the mission goes awry and the astronauts decide to shut HAL down, the AI discovers their plot by lip-reading. HAL arrives at a new conclusion that wasn't part of its original programming, deciding to save itself by systematically killing off the people onboard. The prospect of AI doing more harm than good may not be that farfetched. Experts suggest that weaponized AI could play a big part in future global conflicts, and the late physicist Stephen Hawking suggested that humanity might soon find AI to be the biggest threat to our survival.


Global Fintech Warning To Traditional Banks -- The Threat Is 'Real And Growing'

The UK fintech scene has been boosted by the local financial watchdog adopting EU so-called open banking rules early, forcing lenders to open up access for fintechs to the data and accounts of any clients who authorize it. Earlier this year the UK government created a crypto-assets task force, updated fintech regulation and built a UK-Australia so-called fintech bridge to help firms expand internationally. The rise in fintech firms and banking startups was sparked by the 2008 global financial crisis, which caused banks to cut back on spending and withdraw from some markets altogether — leaving a vacuum fintech companies stepped into. By using technology to make finding, registering and lending to new customers quicker and easier, these fintech companies have forced the traditional banking industry, which is famously slow to adapt, to react.


Confessions of a UX Designer


Some trends will stay in our profession. Those are usually theories based on sound foundational principles — the principles of our profession from decades of research and application. They’re like 501 Levis and a solid print t-shirt. They’ll never go out of style. But, most trends will fall by the wayside. Following them will often lead you and your project astray. As a general rule of thumb: Stay off the bandwagon, stray from the crowd and step to the beat of your own music — no matter how measured or far away. And as for those trending concepts that do stick around or seem to hold some validity — make sure you aren’t adopting them just because everyone else is. Don’t build a mobile app for a user base who would benefit more from a desktop application. Don’t adopt an idea based on an article you read citing a weak study or misrepresenting a study. In short, critically and strategically evaluate the concepts you add to your repertoire.


Embracing Conflict to Fuel Digital Innovation

When organizations try to determine the economic value of their data (EvD), a natural conflict arises between 1) keeping all the data because of its potential monetization value and 2) the potential storage and data management costs, not to mention potential fines and liabilities associated with data security and privacy breaches of that data. This highlights the following conflicts: Maximizing Value – Data assets have considerable potential economic or financial value they can add in terms of new revenue opportunities, process efficiencies, cost reductions, risk mitigation, etc. Monetizing these data sources is the key to unlocking the potential in the big data era; and Minimizing Risk – Many organizations do not fully quantify the costs and risks associated with their corporate data. Denial of access to data, such as we recently saw with the global WannaCry cyberattack, is just one example of the risk inherent in underappreciating reliance on data. Data has both present and future value – and only once that value is fully understood can the risk be mitigated.


Agile Implementation from a Manager's Perspective


What is this strange role of an Agile Coach? On the board of the organization, a group of people appeared who were to carry the torch of agile education. They were to stimulate agile software development in teams, remove obstacles, talk to teams and their managers, encourage them to think differently about the development of software, and discourage the use of old methods. They started talking with my people, asking different questions over a cup of coffee or whatnot. They were sitting together with them in open spaces and observing what was happening. What were they talking about with my people? Or even worse, who were they informing, and about what? They sneaked from one meeting to another like a mysterious Agent Smith. What was going on? Why change something that works? I did not see the need to introduce Scrum, as my team was reaching the goals set for them without it. Better is the enemy of good ... do not touch it because you will get burnt.


AI, cybersecurity shape the CIO agenda for 2019 as IT budgets rise

Dynamism—the ability to embrace change and adopt technology—is the biggest predictor of digital transformation success, Mike Harris, executive vice president of research at Gartner, said during the opening keynote address at the Gartner Symposium/ITxpo. However, privacy is a top barrier to becoming dynamic. "If you don't successfully master privacy, your entire digital transformation is at risk," Harris said during the keynote. Businesses are increasingly scaling their digital efforts, the survey found: 33% of CIOs worldwide said they had evolved their digital endeavors to scale, up from 17% the year before. The major driver for scale is increasing consumer engagement through digital channels, the survey found. "The ability to support greater scale is being invested in and developed in three key areas: Volume, scope and agility. All aim at encouraging consumers to interact with the organization," Rowsell-Jones said in the release. 



Quote for the day:


"If you can't swallow your pride, you can't lead. Even the highest mountain had animals that step on it." - Jack Weatherford


Daily Tech Digest - October 20, 2018


Habits, it seems, get in the way of change despite our best intentions. “Habits are triggered without awareness — they are repeated actions that worked in the past in a given context or in a similar experience,” she notes. Wood’s research shows that concentrating on changing unwanted behaviors, and then creating new ones — not focusing on motivation — is the key to making change. She cites various efforts aimed at changing smoking habits in the U.S. from 1952 to 1999. Smoking decreased not when smokers were made aware of the health risks, but when buying and smoking cigarettes was made more difficult and less rewarding. Thus, higher taxes, smoking bans in public places, and limits on point-of-purchase ads — which add friction to smoking — were a more effective deterrent than warning labels on cigarette packages and public service advertising about smoking’s negative effects. A similar strategy of changing the context is possible in the workplace: Make old actions more difficult; make new, desired actions easier and more rewarding.


7 Ways A Collaboration System Could Wreck Your IT Security


Before an IT group blithely answers the call for a collaboration system – by which we mean groupware applications such as Slack, Microsoft Teams, and Webex Teams – it's important to consider the security risks these systems may bring. That's because the same traits that make these, and similar, applications so useful for team communications also make them vulnerable to a number of different security issues. From their flexibility for working with third-party applications, to the ease with which team members can sign in and share data, low transactional friction can easily translate to low barriers for hackers to clear. When selecting and deploying collaboration tools, an IT staff should be on the lookout for a number of first-line issues and be prepared to deal with them in system architecture, add-ons, or deployment. The key is to make sure that the benefits of collaboration outweigh the risks that can enter the enterprise alongside the software.


Apache Kafka: Ten Best Practices to Optimize Your Deployment


A running Apache ZooKeeper cluster is a key dependency for running Kafka. But when using ZooKeeper alongside Kafka, there are some important best practices to keep in mind. The number of ZooKeeper nodes should be maxed at five. One node is suitable for a dev environment, and three nodes are enough for most production Kafka clusters. While a large Kafka deployment may call for five ZooKeeper nodes to reduce latency, the load placed on nodes must be taken into consideration. With seven or more nodes synced and handling requests, the load becomes immense and performance might take a noticeable hit. Also note that recent versions of Kafka place a much lower load on ZooKeeper than earlier versions, which used ZooKeeper to store consumer offsets. Finally, as is true with Kafka’s hardware needs, provide ZooKeeper with the strongest network bandwidth possible. Using the best disks, storing logs separately, isolating the ZooKeeper process, and disabling swaps will also reduce latency.
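The sizing guidance above follows from ZooKeeper's majority-quorum rule: an ensemble of n nodes stays available only while a majority is up, so it tolerates floor((n - 1) / 2) failures. A minimal sketch of that arithmetic (the function name is illustrative, not part of any ZooKeeper API):

```python
def tolerated_failures(ensemble_size: int) -> int:
    """A ZooKeeper ensemble needs a majority of nodes up,
    so it tolerates floor((n - 1) / 2) failures."""
    return (ensemble_size - 1) // 2

# 3 nodes tolerate 1 failure and 5 tolerate 2, while 4 still
# tolerate only 1 -- which is why odd ensemble sizes are preferred
# and going past 5 mostly adds sync load, not resilience.
for n in (1, 3, 4, 5, 7):
    print(n, tolerated_failures(n))
```

Note that an even-sized ensemble buys no extra fault tolerance over the next-smaller odd size, which is why the recommended sizes are 1, 3, and 5.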


The Evolution of Mobile Malware


Mobile malware isn’t just an opportunistic tactic for cybercriminals. Kaspersky Lab is also seeing its use as part of targeted, prolonged campaigns that can affect many victims. One of the most notable discoveries this year was Skygofree. It is one of the most advanced mobile implants that Kaspersky Lab has ever seen. It has been active since 2014, and was designed for targeted cyber-surveillance. It is spread through web pages, mimicking leading mobile network operators. This was high-end mobile malware that is very difficult to identify and block, and the developers behind Skygofree have clearly used this to their advantage: creating and evolving an implant that can spy extensively on targets without arousing suspicion. ... In recent times, rooting malware has been the biggest threat to Android users. These Trojans are difficult to detect, boast an array of capabilities, and have been very popular among cybercriminals. Once an attacker has root access, the door is open to do almost anything.


What is the CMO's Technology Strategy for 2019 and Beyond?

Even the CMOs that don’t have the technological background are becoming more tech savvy. Integrate CMO Vaughan said he considers himself and his colleague marketers technology investors, trying to manage a portfolio of tech to provide efficiency, effectiveness and unique capabilities for the company. “We view technology as an enabler of our strategy and an important part of advancing our marketing capabilities,” Vaughan said. “We have tried to be very disciplined about not buying tech for tech sake, which is not always easy to do today with so many options. We start with the strategy, what we are trying to accomplish and build a roadmap, including ROI and an adoption plan and model for each technology we evaluate.” Vaughan said CMOs should know what is available and at their disposal to differentiate and accelerate their strategy. “This does not mean you have to be a technology expert,” he said.


Privacy, Data, and the Consumer: What US Thinks About Sharing Data

To prevent data being lost or stolen is the most obvious “table stake” for consumers. Just as important is the question of whether marketers should have it in the first place. This links clearly to the likes of GDPR in Europe where the bar has been raised for all organizations around justification of the data they hold. But if we have the right data, for the right reasons, if we keep it safe and if we can make it more transparent how we’re using that data to provide a more respectful, personalized, fairer and rewarding service to the consumer, the trust will grow. Equally, we need to trust the consumer, again by providing transparent access to the data we hold, clarity around how we use it and the ability for them to control their data. Overall, the research shows that while consumers are rightly concerned about data privacy, they are also aware that data is an essential part of today’s economy, with 57% on average, globally, agreeing or strongly agreeing. Factor in the neutrals and around two-thirds of consumers are accepting or neutral around data use in today’s data-driven, data-enabled world.


NHS standards framework aims to set the bar for quality and efficiency


Although most of the standards in the framework aren’t necessarily new, they are “intended to be a clear articulation of what matters the most in our standards agenda, and is accompanied by a renewed commitment to their implementation,” said NHS Digital CEO Sarah Wilkinson in the framework’s foreword. Speaking at the UK Health Show on 25 September, Wilkinson said the potential for use of data in the NHS is huge, but the health service needs to get to grips with standards to reap the benefits. Most of the standards in the framework, which is currently in beta form and out for consultation, are based on international ones; however, some are specialised for the NHS. This includes using the NHS number as a primary identifier – a standard which has been in place for a long time, but has seen mixed uptake. The framework said the standard “is live now and should be adhered to in full immediately”.


Open Banking has arrived, whether you like it or not

Australia has introduced Open Banking rules that will force the banks to share data with trusted Third-Party Providers (TPPs) by June 2019; Mexico has introduced a Fintech Law; South Korea and Singapore have enforced rules around financial data sharing between banks and third parties; and the USA has seen several banks innovating around open financial structures, although there is no law enforcing them to do this, yet. What intrigues me about the market movements is that some large financial players are taking a lead in this space, such as Citibank and Deutsche Bank’s open API markets, whilst some are resisting the change. I have heard several reports in the UK that the large banks have made data sharing incredibly difficult for the customer, by making the permissioning process very onerous and time-consuming. Equally, the implementation of European rules under PSD2 has seen several Fintech firms cry foul, as each bank creates its own interpretation, and therefore API interface, of the law.


How Data Changed the World


Running a city is always a challenging task. With Big Data, however, come new opportunities alongside new challenges. Instead of having to rely on surveys and manually tracking how people move throughout an area, cities can rely on sensor-derived data, providing far greater resolution and a pool of data orders of magnitude larger than ever before available. Many of these advances may seem a bit mundane at first; developing improved traffic routes, for example, is unlikely to garner many headlines. However, these changes lead to concrete improvements, saving travelers time and improving overall quality of life. Furthermore, Big Data-derived improvements can inform city planners when deciding which direction their cities will take in the future. Before launching large and expensive projects, city managers will be able to look at information gleaned from Big Data to determine what the long-term effects will be, potentially changing cities in fundamental ways.


Give REST a Rest with RSocket


An often-cited reason to use REST is that it’s easy to debug because it’s “human readable”. Not being easy to read is a tooling issue. JSON text is only human readable because there are tools that allow you to read it – otherwise it’s just bytes on a wire. Furthermore, half the time the data being sent around is either compressed or encrypted — both of which aren’t human readable. Besides, how much of this can a person “debug” by reading? If you have a service that averages a tiny 10 requests per second with a 1-kilobyte JSON payload, that is roughly 860 megabytes of data a day, or 250 copies of War and Peace every day. No one can read that, so you’re just wasting money. Then there is the case where you need to send binary data around, or you want to use a binary format instead of JSON. To do that, you must Base64-encode the data. This means that you essentially serialize the data twice — again, not an efficient way to use modern hardware.
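The Base64 overhead the author alludes to is easy to quantify: every 3 bytes of binary input become 4 output characters, roughly a 33% inflation before any double-serialization cost. A quick illustration in Python using the standard library:

```python
import base64

# 3,072 bytes of arbitrary binary data standing in for a payload.
payload = bytes(range(256)) * 12

encoded = base64.b64encode(payload)

# Every 3 input bytes map to 4 output characters: 3072 -> 4096.
print(len(payload), len(encoded))   # 3072 4096
print(len(encoded) / len(payload))  # ~1.33
```

The ratio is exact here because the input length is a multiple of 3; otherwise padding rounds the output up to the next multiple of 4, making the overhead slightly worse.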



Quote for the day:


"Managers maintain an efficient status quo while leaders attack the status quo to create something new." -- Orrin Woodward


Daily Tech Digest - October 19, 2018

McAfee researchers uncover ‘significant’ espionage campaign


The researchers believe the new version could only have been created by having access to the original source code, which has been modified to make the malware better able to avoid detection. This behaviour is in line with other nation state operations, which tend to recycle and evolve code, the researchers said. According to the research report, Oceansalt was launched in five attack “waves” adapted to its targets. The first and second waves were spear phishing-based and began with a malicious Korean-language Microsoft Excel document created in May 2018 that acted as a downloader for the implant. The Excel document contained information that led McAfee researchers to believe targets were linked to South Korean public infrastructure projects. In all malicious documents, embedded macros were used to contact a download server and write the Oceansalt implant to disk. Once connected, the implant was designed to send the IP address and computer name of the targeted machine, as well as the file path of the implant.



Audits: The Missing Layer in Cybersecurity

When organizations are astute enough to turn to their audit teams for cybersecurity support, auditors must be prepared to deliver value, aligned to the speed of their business. Just as the businesses that auditors support are rapidly transforming, the audit groups must follow suit. This can be challenging, considering many IT auditors received much of their professional training many years ago, when the word cybersecurity did not command the attention it does today, and before transformative technologies such as artificial intelligence, connected Internet of Things devices, and cloud-based platforms were so prevalent and impactful. Here's the good news: There are many more educational and training resources available today than 20 years ago, when I began in IT audit. Despite time and budget constraints, it is incumbent upon auditors to pursue the appropriate training and credentialing to transform their organizations, refresh their skill sets, and obtain the auditing cybersecurity acumen needed to become integral to their organization's cyber programs.


Best new Windows 10 security features: More patching, updating flexibility


The Windows Defender Security Center has been renamed to merely Windows Security Center to better identify that it’s the main location for security information. Ransomware protection first introduced in 1709 has been simplified to make it easier to add blocked applications to the interface. Click “Allow an app” through “Controlled folder access.” After the prompt, click the + button and choose “Recently blocked apps” to find the application that has been blocked by the protection. You can then build in an exclusion and add them to the allowed list. Because time syncing is so key to both authentication as well as being a requirement for obtaining updates, the Windows Time service is now monitored for being in sync with the proper time. Should the system sense that the time sync service is disabled, you will get a prompt to turn the service back on. A new security providers section exposes all the antivirus, firewall and web protection software that is running on your system. In 1809, Windows 10 requires antivirus to run as a protected process to register.


Cloud Covered – Are You Insured?


We all know we need insurance, but what is the right coverage for me? Well, it really depends on what types of assets you are trying to protect and how your business would be impacted if something happened. Thinking about our daily lives, imagine having 20 doors and windows wide open and then locking, or adding video surveillance to, only the one in the backyard (because your neighbor just told you he had been robbed the night before and that the thief broke into his house through the backyard door). Well, that’s a good start; however, there are still 19 doors and windows wide open and vulnerable for anybody to access, right? That’s pretty much what happens in IT, and securing only a few “doors” is called “black-listing”. Let me explain: every server has 65,535 TCP ports (and the same number of UDP ports). With the black-listing approach, we close just a few ports based on common knowledge of vulnerabilities. Most of the time, we don’t know which ports our apps need in order to work, so we follow this approach and block a few ports while permitting the rest of them.
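The contrast the author is drawing can be sketched as two policy functions: a black-list denies a few known-bad ports and permits everything else, while a white-list (default-deny) permits only the ports an application is known to need. A toy illustration, with port numbers chosen purely as examples:

```python
BLOCKED = {23, 135, 445}    # black-list: a few known-bad ports to close
ALLOWED = {22, 443, 8080}   # white-list: only the ports the app needs

def blacklist_permits(port: int) -> bool:
    # Default-allow: every port is reachable unless explicitly blocked.
    return port not in BLOCKED

def whitelist_permits(port: int) -> bool:
    # Default-deny: every port is closed unless explicitly allowed.
    return port in ALLOWED

# Of the 65,535 TCP ports, black-listing leaves 65,532 reachable,
# while white-listing leaves just 3.
print(sum(blacklist_permits(p) for p in range(1, 65536)))  # 65532
print(sum(whitelist_permits(p) for p in range(1, 65536)))  # 3
```

This is why white-listing is the stronger default: the attack surface is whatever you forgot to close under a black-list, but only what you chose to open under a white-list.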


Is Venture Capital investment in AI Realistic or Out of Control?

There are a few reasons why this investment might be rational. Just as the Internet and mobile revolutions of past decades fueled trillions of dollars of investment and productivity growth, AI-related technologies are promising the same benefits. So this is all rational: if AI is the true transformative technology that it promises to be, then all these investments will pay off as companies and individuals change their buying behaviors, business processes, and ways of interacting. No doubt AI is already creating so-called “unicorn” startups with over $1 billion in valuation. This could be justified if the AI markets are worth trillions. So, what is this money being used for? If you ask the founders of many of these AI companies what their gigantic rounds will be used for, you’ll hear things like geographic expansion, hiring, and expansion of their offerings, products, and services. As we’ve written about before, the difficulty of finding skilled AI talent is pushing salaries and bonuses to ridiculous heights.


20 innovative data centers that give us a glimpse into the future of computing


It is predicted that by 2025 data centers will consume one fifth of the Earth's total power. From cooling to lights to servers, there's no question that data centers eat up a lot of power. Recent news that climate change may be happening faster--and more severely--than initially believed makes traditional data center design, and its massive consumption of power, something that needs to be addressed. ... Project Natick is a Microsoft research endeavor that puts shipping container-sized pods filled to the brim with servers on the bottom of the ocean. The one active test machine currently in operation is just off the coast of Scotland, where Microsoft plans to leave it for up to five years for study. Project Natick servers require zero human interaction and are designed to remain in place for more than five years without the need for maintenance or repair. These servers can be powered by 100% renewable resources and emit zero emissions. According to Microsoft, "no waste products, whether due to the power generation, computers, or human maintainers are emitted into the environment."


Weighing the pros and cons of data security outsourcing

It’s nearly impossible to run a successful business operation in the current marketplace without taking IT seriously. The problem is that very few small and medium-sized businesses have the knowledge or skillset needed to properly manage each individual aspect of IT in-house. This is especially true when it comes to something like data security. One of the keys to running a successful business is being honest with yourself and recognizing what you don’t know. By identifying the areas where you come up short, you can take steps to compensate and overcome so that your business can thrive. One way you do this is through working with knowledgeable individuals that specialize in the areas where you’re deficient. Data security is a specific area where businesses often lack the internal knowledge and expertise to excel. It’s a particularly challenging aspect of IT that business leaders don’t have the time to master internally, so they go outside the company and outsource.


Review: Artificial Intelligence in 2018


Artificial Intelligence is not a buzzword anymore. As of 2018, it is a well-developed branch of Big Data analytics with multiple applications and active projects. Here is a brief review of the topic. AI is the umbrella term for various approaches to big data analysis, such as machine learning models and deep learning networks. We have recently demystified the terms AI, ML and DL and the differences between them, so feel free to check this up. In short, AI algorithms are various data science mathematical models that help improve the outcome of a certain process or automate some routine task. However, the technology has now matured enough to move these data science advancements from the pilot-project phase to the stage of production-ready deployment at scale. Below is an overview of various aspects of AI technology adoption across the IT industry in 2018. ... AI algorithms have mostly surpassed the stage of pilot projects and are currently at various stages of company-wide adoption.


Why CIOs need to find ways to manage the old and the new


Research from Henley Business School and McKinsey shows that to be agile, businesses are choosing not to re-engineer legacy systems, said Manwani: “They either add another interface or do something totally separate.” But this is not a sustainable approach to managing digitisation initiatives, he said. “You can’t keep doing this. Without the engagement of an enterprise architect, businesses will reduce their agility in the medium term.” Just as restructuring of the IT department will never happen on its own, Manwani said: “You should not do major transformation piecemeal.” The enterprise architect's role is to present a coherent plan that can be used as a blueprint to underpin a digital transformation initiative, he added. “When we teach practitioners in the architecture space, it takes some time for them to absorb that they can, and should, engage in strategy development,” he said. “Preparing an architecture target state linked to the strategy is essential. This often requires new capabilities and mindsets.”


Should robots have rights?

California recently passed Senate Bill 1001, which bars companies and people from using bots that intentionally mislead those they are talking to into thinking they are human.  Putting aside the social and legal merits of this law, Bill 1001 implicates a hitherto-abstract, philosophical debate about when a simulation of intelligence crosses the line into sentience and becomes a true AI. Depending upon where you draw the line, this law is either discrimination against another form of sentient life or a timely remedy intended to protect users from exploitation by malevolent actors using human speech simulators. Alan Turing — the father of artificial intelligence but better known to the public for his role in breaking the German naval codes during World War II — foresaw the implications of his theories, which are still foundational to computer science, and was the first to enter this debate. He proposed his eponymous Turing test for artificial intelligence in 1950.



Quote for the day:


"A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves." -- Laotzu