Daily Tech Digest - June 30, 2019

How a quantum computer could break 2048-bit RSA encryption in 8 hours


Shor showed that a sufficiently powerful quantum computer could do this with ease, a result that sent shock waves through the security industry. And since then, quantum computers have been increasing in power. In 2012, physicists used a four-qubit quantum computer to factor 143. Then in 2014 they used a similar device to factor 56,153. It’s easy to imagine that at this rate of progress, quantum computers should soon be able to outperform the best classical ones. Not so. It turns out that quantum factoring is much harder in practice than might otherwise be expected. The reason is that noise becomes a significant problem for large quantum computers. And the best currently available way to tackle noise is error correction, which itself requires a significant number of extra qubits. Taking this into account dramatically increases the resources required to factor 2048-bit numbers. In 2015, researchers estimated that a quantum computer would need a billion qubits to do the job reliably. That’s significantly more than the 70 qubits in today’s state-of-the-art quantum computers.



How Urbanhire is disrupting HR in Indonesia

Specifically, the hiring platform allows companies to post jobs across more than 50 portals, including Google, LinkedIn and Line - a freeware app that became Japan’s largest social network in 2013. Tapping into a pool of more than one million active jobseekers, the software-as-a-service (SaaS) platform follows a “data-driven hiring strategy”, aligning businesses to a four-step digital strategy of “source, assess, recruit and on-board”. Three years since launching, key customers include global brands such as AIA, Zurich and The Body Shop, in addition to Indonesian organisations like Danamon, Pertamina and Djarum. “Indonesia is a fantastic opportunity given where it is at from a growth perspective,” Kamstra added. “As a tech entrepreneur, I love the fact that we can use business models that have been successful in more developed countries without a lot of the baggage that comes with historical tech implementations that are no longer sufficient. I love to use the telecom industry as an example. Indonesia was able to go from little infrastructure to a very modern one by not having gone through all the investment steps that countries like the US were forced to do as pioneers.”


Don't Miss These 10 Cybersecurity Blind Spots

When an employee is terminated, it’s important to shut down their access to all work-related accounts — immediately. Ideally, automate as much of the account-termination process as possible and ensure that it covers all accounts for all employees. This can be easier said than done, but it's important to get a process or automated solution nailed down before that employee's access causes an unwanted breach. ... Any application that uses third-party software components, including open-source components, takes on the risk of potential vulnerabilities in those dependencies. These vulnerabilities should be identified, tracked and accounted for in the same way as every other software component. ... Service accounts are used by machines, and user accounts are used by humans. The trouble with service accounts is that they sometimes have access to many different systems, and their passwords aren’t always managed well. Poorly managed passwords make for easy compromise by attackers.


Business needs to see infosec pros as trusted advisers

The first issue clouding communication between security professionals and the board or senior business leaders is the misunderstanding that IT risk is separate from business risk. Nothing could be further from the truth, especially considering that in most organisations today, the separation between what is IT and what is business is hard to identify because technology is the backbone of everything the business does. The second issue relates to how the message is packaged. Is the language full of technical jargon, or is it simple to understand and does it get the message across in business terms? Does it highlight the loss to the business in terms understood by the board and senior business leaders? Take the example of the business downtime required when a patch needs to be applied. Instead of talking in terms of the technical threats and the outcomes of poor patching, security professionals would be more effective explaining it in terms of loss to the business, such as lost opportunities or losses from an attack that may occur because of the unpatched status.


MongoDB CEO explains where the company has an edge over database giant Oracle


Cramer noted that Oracle, which has a nearly $195 billion market cap, has recently bought back billions in stock and has a big war chest. Despite that, MongoDB's architecture sets the younger company apart, Ittycheria said. The firm's database is built for the modern world, he added. "[Oracle] built an architecture designed in the late '70s for the world then, and they just tried to make it better over time," he said. "We built an architecture designed for today's high performance mobile cloud computing world." Ittycheria explained how MongoDB helped Cisco address an order management application issue in which it receives tens of billions of orders from different sales channels a year. The platform serves more than 14,000 customers, including some of the most "sophisticated, demanding customers in the world." The list ranges from big media to telecom to gaming to financial services, he said. Start-ups are also developing their business on MongoDB, Ittycheria said.


Fix your cloud security

Enterprises are either not willing to use the right technology, or they don’t understand that the technology exists. It’s not that the database is unencrypted, it’s that nobody has any idea how to turn on encryption in flight or at rest. Also at fault are the “it was not on-premises” folks out there. They cling to the fact that since some security feature was not a part of the original on-premises system, it shouldn’t be needed in the cloud. The time to deal with security issues is when you move from on-premises to the public cloud. You need to spend at least a couple of weeks looking at identity access management, encryption, auditing, proactive security, and more, and then evaluate each one's fit for your enterprise. Otherwise, you could miss the cloud security boat as you make the migration. In my opinion, this is the single most important step in migration. It allows you to reflect on what your security needs really are and how to solve them using cloud computing technology which, these days, is better than anything you can find on-premises.


Can Apple compete on privacy?


Apple's privacy campaign has already had an impact in terms of forcing the competition to pay closer attention to their disclosure and controls. It is unlikely to move the needle in terms of market share, but Apple can only gain as awareness of the great data tradeoff of targeted advertising grows and missteps in executing it continue. It should also be more effective as a retention tool for anyone who has not already been locked into Apple's milieu through its self-reinforcing portfolio of devices and growing family of services. Furthermore, while the smartphone market is mature, whatever emerging platform challenges it will likely raise even more profound privacy concerns. Already, wearables measure our pulse and assess whether we've fallen, and the kind of personal data that could be generated by measuring exactly what you're looking at via augmented reality gear could make smartphone-generated data seem crude by comparison. And there's another potential benefit to Apple's privacy campaign, one that the company has developed since it first stepped up its advocacy.


Serverless: applications only when you need them - no more, no less

Traditional IT architectures use a server infrastructure, whether on-premises or cloud-based, that requires managing the systems and services required for an application to function. The application must always be running, and the organization must spin up other instances of the application to handle more load, which tends to be resource-intensive. Serverless architecture focuses instead on having the infrastructure provided by a third party, with the organization only providing the code for the applications, broken down into functions that are hosted by the third party. This allows the application to scale based on function usage and is more cost-effective, since the third party charges based on how often the application's functions are invoked, instead of having the application running all the time. ... Serverless computing is constrained by performance requirements, resource limits, and security concerns, but excels at reducing costs for compute. That being said, where feasible, one should gradually migrate over to serverless infrastructure to make sure it can handle the application requirements before phasing out the legacy infrastructure.
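To make the cost model concrete, here is a minimal Python sketch of what an application function looks like in this style; the event shape and field names are illustrative assumptions, not any specific provider's contract.

```python
# A generic serverless-style handler: the provider owns the runtime and
# invokes this function once per event, billing per invocation instead of
# for an always-on application. All names here are illustrative.
import json

def handler(event, context=None):
    """Handle exactly one event; the runtime is ephemeral, so any state
    must arrive with the event or live in external storage."""
    order_id = event.get("order_id")
    if order_id is None:
        return {"status": 400, "body": json.dumps({"error": "missing order_id"})}
    # ... business logic for this single request goes here ...
    return {"status": 200, "body": json.dumps({"processed": order_id})}

if __name__ == "__main__":
    # Local smoke test; in production the platform calls handler() directly.
    print(handler({"order_id": "A-1001"}))
```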


Four Myths of Digital Transformation: What Only 8% of Companies Know


New research by Bain & Company finds that only 8% of global companies have been able to achieve their targeted business outcomes from their investments in digital technology. Said another way, more than 90% of companies are still struggling to deliver on the promise of a technology-enabled business model. What secret formula do the 8% deploy? Unsurprisingly, there are no shortcuts or silver bullets. But successful transformations do share some common themes. One of the most important is understanding that this is really a business transformation, supported by investments in new technology—not new technology in search of opportunities. Many executives pay lip service to this idea, but in practice, they delegate too much responsibility to the tech team, hoping the business can watch from the sidelines. At the 8%, executive teams understand that the core of a digital transformation is a business transformation, changing the way of engaging customers across channels, simplifying business processes, and redesigning products or services.


“We need to up our game”—DHS cybersecurity director on Iran and ransomware

Both the Iranian malicious activities and ransomware attacks are largely dependent on exploiting the same sorts of security issues. Both rely largely on the same tactics: malicious attachments, stolen credentials, or brute-force credential attacks to gain a foothold on targeted networks, then using readily available malware and those stolen credentials to move across a network. When asked if the recent ransomware attacks on cities across the US (including three recent attacks in Florida with dramatically larger ransom demands) were indicative of a new, more targeted set of campaigns against US local governments, Krebs said that the attacks were likely not targeted—at least not initially. "I still think these [ransomware campaigns] are fairly expansive efforts, where [the attackers] are initially scanning, looking for certain vulnerabilities, and when they find one that's when they start to target," he said. "Again, I'm not sure we have the information right now saying they were specifically targeted."



Quote for the day:


"Leaders stuck in old cow paths are destined to repeat the same mistakes. Change leaders recognize the need to avoid old paths, old ideas and old plans." -- Reed Markham


Daily Tech Digest - June 29, 2019

India gears up for historic data protection law

India is getting ready for the law after the Narendra Modi government listed it as one of the bills in the Parliament last week, in the first session after the general elections. The election in April gave Modi a second term with a massive majority. The bill will create a legal regime for how data can be shared, stored and used in India. The proposed law, once passed by Parliament, will have major consequences for technology companies hoping to build businesses that access user data. Most technology companies like Google, Facebook and others now thrive on data generated by their users to earn billions of dollars worldwide. Mukesh Ambani, who heads Reliance Industries, India’s biggest firm working in the energy, telecommunications and retail space, pointed out in January this year that “data is the new oil”. However, while the cabinet is yet to clear the final draft of the data protection bill, technology and privacy experts are concerned about the implications of the proposed new law. Many companies, including the technology giants hoping to tap into India’s massive markets, are apprehensive about what this entails for their core business models.



Hands on a screen for biometric identity access
“You have to be careful when there is sensitivity around personal data,” Kampman said. Whether it’s AI or any identity-related effort, “you need governance over this to be clear about what can be used and what can’t be used for a given purpose. You are a custodian of data and when you aggregate that data your responsibilities increase exponentially.” Broadly, the looking-before-leaping paradigm is in full force here. As government IT leaders and their business-line peers seek to better manage access and identity in an emerging cloud-driven enterprise, they’ll need to be thoughtful not just about the how, but about the why behind their efforts. “There needs to be a strategy,” Kampman said. “What is the outcome going to be? The technology world can solve these problems but it needs to be done with a viewpoint toward how it will appear to the end user. You want to have control over the technologies but you also want all the stakeholders to have an opportunity to contribute toward governance.”



Google has more deep data knowledge than any company in the world, and it is no slouch in the discipline of design. It’s only natural that the company would combine this expertise. Initially, the audience for the new data design guidelines was Google itself, but much as it did for Material Design, the company decided to publicize these best practices and encourage others to adopt them—anyone from app developers to everyday people who are left wondering why their PowerPoint chart sucks. “We started doing this internally as a way to guide [employees] through the do’s and don’ts of chart creation,” Lima tells Fast Company. “After conducting various research studies and partnering with teams across the company, the do’s and don’ts evolved into a set of high-level principles that were strongly rooted in Google-wide tenets crucial to the company’s growth, brand, and culture. These principles are meant to be generative and not prescriptive. We hope they can help any chart creator during ideation and evaluation.” The six principles read something like an introductory data design course.


Blockchain Technology: Enabling Enterprise Innovation

The most satisfying finding from Deloitte is that business leaders are taking blockchain as seriously as we’d hoped. Deloitte found that “53 percent of respondents say that blockchain technology has become a critical priority for their organizations in 2019—a 10-point increase over last year.” That more than half of the respondents name blockchain technology as a critical priority is, in my eyes, the first tremor in what promises to be a substantial shake up of the business technology landscape. Accordingly, when the authors report that many leaders are focusing less on whether blockchain works (spoiler: it does) and more on what business models it might disrupt, they quote Deloitte Consulting LLP Principal Linda Pawczuk, Deloitte consulting leader for blockchain and cryptocurrency. She says, “We believe executives should no longer ask a single question about blockchain but, rather, a broad set of questions reflecting the role blockchain can play within their organization.”


Here Is A Look At Where Fintech Is Leading Us And Why


The business consultancy powerhouse reports that there is an abundance of fintech enterprises entering the market using novel business models and delivering fresh consumer offerings. Furthermore, says EY, the emerging fintech revolution is driving information sharing and the development of open-source Application Programming Interfaces (APIs) as well as recent technological breakthroughs in artificial intelligence (AI) and biometrics. Around the world, lawmakers are following the example of Europe by promoting open-access APIs. By doing so, legislators desire to enhance consumer choice by increasing competition between banks and fintech enterprises. For new fintech firms, open-source APIs streamline the launch of new products and services and decrease costs customarily used for research and development. New fintech banks that build their organization around a digital business model represent the fastest growing segment of startups nurtured by this movement.


Image Classification Using Neural Networks in .NET

Image classification is one of the most common use cases for non-recurrent neural networks. The basic concept is that a neural network is given an input image, and its input layer has one neuron per pixel in the image (assuming the image is grayscale). Likewise, the network should have one output neuron per available classification. The neural network could use convolutional layers, fully connected layers or a combination of both. Convolutional networks are faster because they shrink the input image, convolving it with multiple kernels to extract the important features. More details on convolution can be found here. Convolution greatly reduces the size of the fully connected layers that classify the image after a series of convolutions and pooling steps. Because a neural network with the appropriate activation functions can only take inputs and produce outputs as values ranging from 0 to 1, feeding an image to a neural network requires some pre-processing on the input end to normalize the pixels into this form.
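As a concrete illustration of that pre-processing step, here is a minimal Python sketch (assuming NumPy is installed) that flattens a grayscale image into one input value per pixel, normalized into the 0-to-1 range the network expects.

```python
# Normalize a grayscale image into the [0, 1] input vector a simple
# feed-forward classifier expects: one input neuron per pixel.
import numpy as np

def image_to_input_vector(image: np.ndarray) -> np.ndarray:
    """Flatten an HxW grayscale image into floats in [0, 1]."""
    assert image.ndim == 2, "expecting a single-channel (grayscale) image"
    return (image.astype(np.float64) / 255.0).ravel()

# Example: a fake 28x28 image, the size used by many introductory datasets.
fake_image = np.random.randint(0, 256, size=(28, 28))
x = image_to_input_vector(fake_image)
print(x.shape, float(x.min()), float(x.max()))  # (784,) values in [0.0, 1.0]
```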


Fortune 100 passwords corporate secrets left exposed on unsecured Amazon S3 server


Some of the world’s biggest companies have had 750GB worth of their innermost secrets revealed on unsecured Amazon S3 buckets, available for anybody to download – no password required. The startling revelation came from researchers at UpGuard, who discovered three publicly accessible Amazon S3 buckets related to Attunity, a leading provider of data integration and big data management software solutions, on May 13th 2019. The fact that Attunity is at the centre of the security breach is a concern, simply because of its impressive list of customers. On its website, the company boasts that it counts more than 2,000 enterprises and half the Fortune 100 in its customer base. According to screenshots published on UpGuard’s blog, Fortune 100 companies such as Netflix, Ford, and TD Bank were amongst those who had their data recklessly exposed. For instance, the researchers discovered files containing the usernames and passwords of Netflix database systems, and internal Ford presentations.
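To see what "no password required" means in practice, here is a minimal Python sketch (using boto3, assumed installed) of listing a misconfigured public bucket anonymously; the bucket name is hypothetical.

```python
# Anonymously enumerate a public S3 bucket: no credentials are configured,
# and requests are deliberately left unsigned. The bucket name is made up.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
response = s3.list_objects_v2(Bucket="example-leaky-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])  # every object is one GET away
```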



NotPetya Retrospective

Each of the companies impacted by NotPetya (and WannaCry before it) had some degree of security protection in place—the usual stuff like firewalls, antivirus, and patch management. That defense obviously wasn’t perfect or the attack would have been thwarted, but a perfect defense costs $∞ and is therefore impractical. As we deal with the realities of an imperfect defense, it becomes necessary to choose between preventative and reactive measures. Security expert Bruce Schneier makes the point on his resilience tag: ‘Sometimes it makes more sense to spend money on mitigation than it does to spend it on prevention.’ An investment in mitigation can also pay off in all kinds of ways that have nothing to do with attacks: that change that was just accidentally made to production when it should have been in test—fixed in seconds, by reverting to the last snapshot. NotPetya is unlikely to keep its ‘most devastating cyber attack’ title for long. There will be another attack, and we should expect it to be worse. Moving away from a trusted network model to a zero-trust model is the most effective way to defend against such attacks. But, effort should also focus on measures that allow speedy recovery.


Managing Machine Learning Models The Uber Way

With access to the rich dataset coming from the cabs, drivers, and users, Uber has been investing in machine learning and artificial intelligence to enhance its business. Uber AI Labs consists of ML researchers and practitioners who translate the benefits of state-of-the-art machine learning techniques and advancements to Uber’s core business. From computer vision to conversational AI to sensing and perception, Uber has successfully infused ML and AI into its ride-sharing platform. Since 2017, Uber has been sharing the best practices of building, deploying, and managing machine learning models. Some of the internal tools and frameworks used at Uber are built on top of popular open source projects such as Spark, HDFS, Scikit-learn, NumPy, Pandas, TensorFlow and XGBoost. Let’s take a closer look at Uber’s projects in the ML domain. Michelangelo is a machine learning platform that standardized the workflows and tools across teams through an end-to-end system. It enabled developers and data scientists across the company to easily build and operate machine learning systems at scale.


Are You Choosing Fintech—or Is Fintech Choosing You?


The type of financial technology solutions that are best suited for any particular institution vary tremendously. Some institutions may be looking to digitize or modernize processes from within, others may be looking to add on a single solution such as mobile payments. Fintech solutions could also involve data aggregation or lead generation activities as well as arrangements to buy assets, such as small business loans, from leading online lenders. Once the fintech solution is identified, each institution needs to identify the best strategy for itself to either compete or collaborate with emerging players—and capitalize on trends and capabilities to position itself with the most competitive advantage going forward. There are generally two broad strategies that financial institutions can pursue: invest in or build emerging technologies on your own, or buy, partner or network with fintech companies. ... Consider building in-house if there are sufficient internal resources, expertise and scale to innovate and customize unique capabilities. These strategies may be more appropriate for regional or larger banks than smaller community banks.



Quote for the day:


"There are some among the so-called elite who are overbearing and arrogant. I want to foster leaders, not elitists." - Daisaku Ikeda


Daily Tech Digest - June 28, 2019

MIT: We're building on Julia programming language to open up AI coding to novices

The system allows coders to create a program that, for example, can infer 3-D body poses and therefore simplify computer vision tasks for use in self-driving cars, gesture-based computing, and augmented reality. It combines graphics rendering, deep learning, and types of probability simulations in a way that improves a probabilistic programming system that MIT developed in 2015 after being granted funds from a 2013 Defense Advanced Research Projects Agency (DARPA) AI program. The idea behind the DARPA program was to lower the barrier to building machine-learning algorithms for things like autonomous systems. "One motivation of this work is to make automated AI more accessible to people with less expertise in computer science or math," says lead author of the paper, Marco Cusumano-Towner, a PhD student in the Department of Electrical Engineering and Computer Science. "We also want to increase productivity, which means making it easier for experts to rapidly iterate and prototype their AI systems."


Distributed Agile teams can help global enterprises reach their deployment and cost-reduction goals. Distributed teams reduce the overhead costs for an organization, and let it build a bigger pool of talented people than if the organization eschewed remote job candidates. In essence, location independence makes an organization much more agile and productive. However, these global teams must address some inherent collaboration challenges, such as differences in time zones, cultures and language barriers. For a distributed Agile team to succeed, each worker must make some extra effort. Project managers should strive to: arm the team with the right tools for communication and collaboration; understand personnel strengths and weaknesses; encourage transparency; hold regular meetings; set clear expectations for stakeholders and team members; adhere to engineering best practices and standards; focus on achievable milestones; and build awareness of different cultures. Let's look at some of the best practices that can help distributed Agile teams address these specific challenges.


Certain Insulin Pumps Recalled Due to Cybersecurity Issues

In a statement, the FDA says it is warning patients and healthcare providers that certain Medtronic MiniMed insulin pumps have potential cybersecurity risks. "Patients with diabetes using these models should switch their insulin pump to models that are better equipped to protect against these potential risks," the FDA says. The potential risks are related to the wireless communication between Medtronic's MiniMed insulin pumps and other devices such as blood glucose meters, continuous glucose monitoring systems, the remote controller and CareLink USB device used with these pumps, the FDA warns. "The FDA is concerned that, due to cybersecurity vulnerabilities identified in the device, someone other than a patient, caregiver or healthcare provider could potentially connect wirelessly to a nearby MiniMed insulin pump and change the pump's settings. This could allow a person to over deliver insulin to a patient, leading to low blood sugar (hypoglycemia), or to stop insulin delivery, leading to high blood sugar and diabetic ketoacidosis (a buildup of acids in the blood)," the agency's statement says.


Determining data value by measuring return on effort

I'm always very encouraged when professionals from different architectural disciplines can converge on common ground. This can be a rare event, so when it does happen, I like to call it out. Such an event happened recently with a contact coming from the business architecture discipline, namely Robert DuWors, with us both trying to put some metrics around the measurement of data value in our respective areas of expertise. I believe that business architecture and information architecture are the two core pillars of the architecture of an enterprise. But practitioners of these interconnected disciplines can frequently rub badly against each other, each side devaluing the other's methods and approaches. So, to reach agreement across the two on what constitutes enterprise value of our efforts is a happy place to be. ...and together, we agreed that this represents the value of a specific item of data to the enterprise from both information and business perspectives. Now, of course, this may be refined over time, but it already contains most of the aspects that together, Robert and I believe are key to this metric... So, what does this equation give us? It's in 3 major sections, which I will call ‘Horizons.’


SoftBank plans drone-delivered IoT and internet by 2023

Why the stratosphere? Well, one reason is that lower altitudes often have strong winds to deal with, including straight after storms. The companies say disaster communications could be a primary use case for the drones, and the stratosphere has a steady current. Also, because of the altitude, LTE and 5G coverage could be much more widespread than any alternative delivery mechanism implemented at a lower altitude. One High Altitude Platform Station (HAPS), as the HAWK30’s genre of base stations are called, could provide service over an area about 125 miles in diameter, and about 40 HAPS could cover the entire Japanese archipelago. A set of older, tethered balloons (SoftBank developed one in 2011) might cover just six miles, SoftBank says. Others are aiming for the stratosphere, too. Newer balloons, such as Alphabet’s Loon, which uses tennis-court-sized balloons, also fly there. SoftBank is a major provider of telecommunications in Japan, a country on the Pacific rim and prone to earthquakes. It is thus keen to find backup alternatives to wired, or even radio-based, ground assets that can get destroyed in natural disasters.


Five Facts on Fintech


From artificial intelligence to mobile applications, technology helps to increase your access to secure and efficient financial products and services. Since fintech offers the chance to boost economic growth and expand financial inclusion in all countries, the IMF and World Bank surveyed central banks, finance ministries, and other relevant agencies in 189 countries on a range of topics and received 96 responses. A new paper details the results of the survey alongside findings from other regional studies, and also identifies areas for international cooperation—including roles for the IMF and World Bank—and in which further work is needed by governments, international organizations, and standard-setting bodies. ... Awareness of cyber risks is high across countries and most jurisdictions have frameworks in place to protect financial systems. Most jurisdictions—79% of those with higher incomes according to the survey results—identified cyber risks in fintech as a problem for the financial sector.


The Windows 10 security guide: How to safeguard your business


Using the Windows Update for Business features built into Windows 10 Pro, Enterprise, and Education editions, you can defer installation of quality updates by up to 30 days. You can also delay feature updates by as much as two years, depending on the edition. Deferring quality updates by seven to 15 days is a low-risk way to avoid a flawed update that can cause stability or compatibility problems. You can adjust Windows Update for Business settings on individual PCs using the controls in Settings > Update & Security > Advanced Options. In larger organizations, administrators can apply Windows Update for Business settings using Group Policy or mobile device management (MDM) software. You can also administer updates centrally by using a management tool such as System Center Configuration Manager or Windows Server Update Services. Finally, your software update strategy shouldn't stop at Windows itself. Make sure that updates for Windows applications, including Microsoft Office and Adobe applications, are installed automatically.
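For a scripted alternative on individual PCs, something like the following sketch could set the quality-update deferral; the registry value names are those behind the "Select when Quality Updates are received" policy, but treat them as assumptions here and prefer Group Policy or MDM in production.

```python
# Windows-only sketch; run from an elevated (administrator) Python.
# Sets the Windows Update for Business quality-update deferral via the
# policy registry keys. Value names are assumptions based on the
# documented update policy; verify against your Windows build.
import winreg

KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
    winreg.SetValueEx(k, "DeferQualityUpdates", 0, winreg.REG_DWORD, 1)
    # Defer quality updates by 14 days, inside the 7-to-15-day window above.
    winreg.SetValueEx(k, "DeferQualityUpdatesPeriodInDays", 0,
                      winreg.REG_DWORD, 14)
```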


Use event processing to achieve microservices state management


Unfortunately, it's often unclear whether a process close to the event source actually maintains the state itself, or whether that state is somehow provided from outside the service. For this reason, it's essential to create unique transaction IDs for all state-dependent operations. You can use that ID to retrieve specific state information from a database or to drive transaction-specific orchestration. This ID also helps carry data from one phase of a transaction, or flow, to another. This setup is essentially a back-end approach to microservices state management. Front-end state management is another option. Control from the front end means that a user or edge process maintains the state data. This state information passes along through an event flow, and each successive process accumulates more information from the previous ones. Since this state information queues along with the events, you won't lose state data during a failure as long as the event stays intact. Also, when processes get scaled with more instances of the microservices involved, the event flow can provide the state data those processes need.
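A minimal Python sketch of the back-end pattern just described: each event carries a unique transaction ID, and every stage accumulates its output onto the event instead of holding state inside the service. All names are illustrative.

```python
# State rides along with the event: any stage (or a recovery process) can
# reconstruct the flow from the event alone, keyed by transaction_id.
import uuid

def new_event(payload: dict) -> dict:
    return {"transaction_id": str(uuid.uuid4()), "payload": payload, "history": []}

def run_stage(event: dict, name: str, result: dict) -> dict:
    """Append this stage's output; no state is kept in the service itself."""
    event["history"].append({"stage": name, "result": result})
    return event

evt = new_event({"order": 42})
evt = run_stage(evt, "validate", {"ok": True})
evt = run_stage(evt, "price", {"total": 19.99})
print(evt["transaction_id"], [h["stage"] for h in evt["history"]])
```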


How to build disruptive strategic flywheels


Capabilities-driven strategy suggests that companies that have a clear way to play (WTP) that aligns with market demands, and that invest in a system of four to six differentiating capabilities that enable the company to excel at the WTP, are better positioned for success. But increasing clock speed changes the calculation. Today, the half-life of a competitive advantage may be fleeting. As industries are disrupted, players that have been successful within the context of one business cycle might need to rethink their differentiating capabilities, their investment portfolios, and possibly even their WTP more frequently and dynamically. Ford no longer just makes cars; it focuses instead on mobility solutions. Big oil companies are investing in renewable energy as a hedge against constraints on emissions. Amazon is competing with…everyone. As a result, it behooves organizations and managers to continually assess competitive moves, regulatory and technology evolution, and consumer preferences — and to adapt decisions in a dynamic fashion.


Smart Lock Turns Out to be Not So Smart, or Secure


Researchers are warning that a keyless smart door lock made by U-tec, called Ultraloq, could allow attackers to track down where the device is being used and easily pick the lock – either virtually or physically. Ultraloq is a Bluetooth fingerprint and touchscreen door lock sold for about $200. It allows a user to use either fingerprints or a PIN for local access to a building. Ultraloq also has an app that can be used locally or remotely for access. When Pen Test Partners, with help from researchers identified as @evstykas and @cybergibbons, took a closer look, they found Ultraloq was riddled with vulnerabilities. For starters, researchers found that the application programming interface (API) used by the mobile app leaked enough personal data from the user account to determine the physical address where the Ultraloq device was being used. ... API has no authentication at all,” researchers wrote. “The data is obfuscated by being base64-encoded twice, but decoding it exposes that the server side has no authentication or authorization logic. This leads to an attacker being able to get data and impersonate all the users,” researchers wrote.



Quote for the day:

"Humility is a great quality of leadership which derives respect and not just fear or hatred." -- Yousef Munayyer

Daily Tech Digest - June 27, 2019

Tracking down library injections on Linux

The linux-vdso.so.1 file (which may have a different name on some systems) is one that the kernel automatically maps into the address space of every process. Its job is to find and locate other shared libraries that the process requires. One way that this library-loading mechanism is exploited is through the use of an environment variable called LD_PRELOAD. As Jaime Blasco explains in his research, "LD_PRELOAD is the easiest and most popular way to load a shared library in a process at startup. This environmental variable can be configured with a path to the shared library to be loaded before any other shared object." ... Note that the LD_PRELOAD environment variable is at times used legitimately. Various security monitoring tools, for example, could use it, as might developers while they are troubleshooting, debugging or doing performance analysis. However, its use is still quite uncommon and should be viewed with some suspicion. It's also worth noting that osquery can be used interactively or be run as a daemon (osqueryd) for scheduled queries. See the reference at the bottom of this post for more on this.
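Beyond osquery, one quick way to hunt for LD_PRELOAD abuse on a live Linux system is to read each process's environment directly out of /proc. The sketch below is an illustrative example (typically requiring root), not part of Blasco's tooling.

```python
# Flag every running process that was started with LD_PRELOAD set.
# Linux-only; /proc/<pid>/environ is readable only with sufficient privilege.
import os

def processes_with_ld_preload():
    hits = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/environ", "rb") as f:
                env_entries = f.read().split(b"\0")
        except (PermissionError, FileNotFoundError, ProcessLookupError):
            continue  # process exited, or we lack privileges
        for entry in env_entries:
            if entry.startswith(b"LD_PRELOAD="):
                hits.append((pid, entry.decode(errors="replace")))
    return hits

for pid, preload in processes_with_ld_preload():
    print(f"suspicious: pid {pid} has {preload}")
```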



Responsible Data Management: Balancing Utility With Risks

To mitigate risks relating to data sharing, good protocols for information exchange need to be in place. Currently these exist bilaterally between certain organisations, but they should extend to apply multilaterally, to an entire sector or to an entire response, to maximise impact. Another way to improve inter-agency data sharing is to use contemporary cryptographic solutions, which allow for data usage without giving up data governance. In other words, one organisation can run analyses on another organisation’s data and get aggregate outputs, without ever accessing the data directly. There are a number of other data-management practices that can reduce the risks of the data falling into the wrong hands, such as ensuring that all computers in the field are password-protected, and have firewalls and up-to-date antivirus software, operating systems and browsers. Additionally, the data files themselves should be encrypted. There are open-source programs that solve all of these tasks, so addressing them may be a matter of competence inside organisations rather than funding.
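As a concrete example of the file-encryption step, here is a minimal Python sketch using the open-source cryptography package (assumed installed); the filename is hypothetical, and storing the key somewhere safer than the data remains the hard part.

```python
# Encrypt a sensitive file at rest with Fernet (authenticated symmetric
# encryption from the open-source `cryptography` package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key off the machine holding the data
fernet = Fernet(key)

with open("beneficiaries.csv", "rb") as f:
    plaintext = f.read()

with open("beneficiaries.csv.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# Later, only a holder of the key can recover the file:
with open("beneficiaries.csv.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == plaintext
```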


Insurer: Breach Undetected for Nine Years

But despite the common challenges in detecting data breaches, the nine-year lag time at Dominion National is unusually high, some experts note. "Dominion National's notification of a breach nine years after the unauthorized access may be an unenviable record for detection," says Hewitt of CynergisTek. "This is unusual because it strongly suggests that they may not have been performing comprehensive security audits or performing system activity reviews." Tom Walsh, president of the consultancy tw-Security, notes: "I am surprised that they detected it dating that far back. Most organizations do not retain audit logs or event logs for that long. Most disturbing is that an intruder or a malicious program or code could be into the systems and not previously detected. Nine years is beyond the normal refresh lifecycle for most servers. I would have thought that it could have been detected during an upgrade or a refresh of the hardware." Walsh adds that it is still unclear whether the incident is reportable under the HIPAA Breach Notification Rule. "They were careful in stating that there is no evidence to indicate that data was even accessed," he notes.


Going Beyond GDPR to Protect Customer Data

GDPR was something of a superstar in 2018. Searches on the regulation hit Beyoncé and Kardashian territory periodically throughout the year. In the United States, individual states began either exploring their own version of the GDPR or, in the case of California, enacting their own regulations. Other states that either enacted or strengthened existing data governance laws similar to the GDPR include Alabama, Arizona, Colorado, Iowa, Louisiana, Nebraska, Oregon, South Carolina, South Dakota, Vermont and Virginia. At this point, there is also a growing number of companies operating outside the EU that are ceasing operations within the EEA rather than taking on expensive changes to their business applications and practices and becoming subject to possible fines. GDPR prosecutions continue, as does the filing of complaints and investigations. Each member country has its own listing of court cases in progress, so it’s a bit difficult to quantify just how many investigations and cases are active.


Juniper’s Mist adds WiFi 6, AI-based cloud services to enterprise edge

“Mist's AI-driven Wi-Fi provides guest access, network management, policy applications and a virtual network assistant as well as analytics, IoT segmentation, and behavioral analysis at scale,” Gartner stated. “Mist offers a new and unique approach to high-accuracy location services through a cloud-based machine-learning engine that uses Wi-Fi and Bluetooth Low Energy (BLE)-based signals from its multielement directional-antenna access points. The same platform can be used for Real Time Location System (RTLS) usage scenarios, static or zonal applications, and engagement use cases like wayfinding and proximity notifications.” Juniper bought Mist in March for $405 million for this AI-based Wi-Fi technology. For Juniper the Mist buy was significant, as it had depended on agreements with partners such as Aerohive and Aruba to deliver wireless, according to Gartner. Mist, too, has partners and recently announced joint product development with VMware that integrates Mist WLAN technology and VMware’s VeloCloud-based NSX SD-WAN. “Mist has focused on large enterprises and has won some very well known brands,” said Chris Depuy


DevSecOps Keys to Success

The most important elements of a successful DevSecOps implementation are automation and collaboration. 1) With DevSecOps, the goal is to embed security early on into every phase of the development/deployment lifecycle. By designing a strategy with automation in mind, security is no longer an afterthought; instead, it becomes part of the process from the beginning. This ensures security is ingrained at the speed and agility of DevOps without slowing business outcomes. 2) Similar to DevOps where there is close alignment between developers and technology operations engineers, collaboration is crucial in DevSecOps. Rather than considering security to be “someone else’s job,” developers, technology operations and security teams all work together on a common goal. By collaborating around shared goals, DevSecOps teams make informed decisions in a workflow where there is the biggest context around how changes will impact production and the least business impact to take corrective action.


Microsoft beefs up OneDrive security

The new feature - dubbed OneDrive Personal Vault - was trumpeted as a special protected partition of OneDrive where users could lock their "most sensitive and important files." They would access that area only after a second step of identity verification, ranging from a fingerprint or face scan to a self-made PIN, a one-time code texted to the user's smartphone or the use of the Microsoft Authenticator mobile app.  The idea behind OneDrive Personal Vault, said Seth Patton, general manager for Microsoft 365, is to create a failsafe so that "in the event that someone gains access to your account or your device," the files within the vault would remain sacrosanct. Access to the vault will also be on a timer, Patton said, that locks the partition after a user-set period of inactivity. Files opened from the vault will also close when the timer expires. As the feature's name implied, the vault is only for OneDrive Personal, the consumer-grade storage service, not for the OneDrive for Business available to commercial customers. Although OneDrive Personal is a free service - albeit with a puny 5GB of storage - many come to it from the Office 365 subscription service.


Disaster planning: How to expect the unexpected


Larger organisations will typically have the advantages of a greater resource reserve and multiple premises in different regions, but they can be slow to react and their communications can struggle. Smaller organisations can be much more adaptable and swifter to react, but rarely have resources in reserve and are usually based in one fixed location. As with all things, preparation is key, and therefore it is worth taking time to prepare business continuity strategies and disaster plans. Rather than being scenario-specific – having dedicated plans for different eventualities – organisations should take an agnostic approach with their business continuity strategies. “If your business recovery plan is written strictly for recovery, regardless of scenario, then you will be in the best shape, as you will know that whatever happens, the plan has been tested,” says Dan Johnson, director of global business continuity and disaster recovery at Ensono. “You will go to your backup procedures that keep your daily processes moving and make sure business is flowing.”


An IoT maturity model and tips for IoT deployment success


Nemertes compiled these results into an IoT maturity model that companies can use as their roadmap to success (see figure). The maturity model comprises four levels: unprepared -- the organization lacks tools and processes to address the IoT initiative; reactive -- the organization has platforms and processes to respond to but not proactively address the issue; proactive -- the organization has the tools and processes to proactively deliver on the issue as it is currently understood; and anticipatory -- the organization has the tools, processes and people to handle future requirements. The one-third of organizations in the survey with successful IoT deployments were likely to be at Level 2 or Level 3 IoT maturity, and in a couple of areas -- namely executive sponsorship, budgeting and architecture -- successful organizations far outshone organizations that were less successful. ... In addition, key IoT team members include the IoT architect, IoT security architect, IoT network architect and IoT integration architect. Most large organizations have different architects that encompass these responsibilities, though their job titles may not reflect it.


AI presents host of ethical challenges for healthcare

With respect to ethics, she observed that the massive volumes of health data being leveraged by AI must be carefully protected to preserve privacy. “The sheer volume, variability and sensitive nature of the personal data being collected require newer, extensive, secure and sustainable computational infrastructure and algorithms,” according to Tourassi’s testimony. She also told lawmakers that data ownership and use when it comes to AI continues to be a sensitive issue that must be addressed. “The line between research use and commercial use is blurry,” said Tourassi. To maintain a strong ethical AI framework, Tourassi believes fundamental questions need to be answered, such as: Who owns the intellectual property of data-driven AI algorithms in healthcare? The patient or the medical center collecting the data by providing the health services? Or the AI developer? “We need a federally coordinated conversation involving not only the STEM sciences but also social sciences, economics, law, public policy and patient advocacy stakeholders” to “address the emerging domain-specific complexities of AI use,” she added.



Quote for the day:


"Nobody in your organization will be able to sustain a level of motivation higher than you have as their leader." -- Danny Cox


Daily Tech Digest - June 26, 2019

MongoDB CEO tells hard truths about commercial open source

As ugly as that sentiment may seem, it's (mostly) true. Not completely, because MongoDB has had some external contributions. For example, Justin Dearing responded to Ittycheria's claim thus: "As someone that has made a (very tiny) contribution to the [MongoDB] server source code, this is kind of insulting to hear [it] said this way." There's also the inconvenient truth that part of MongoDB's popularity has been the broad array of drivers available. While the company writes the primary drivers used with MongoDB, the company relies on third-party developers to pick up the slack on lesser-used drivers. Those drivers, though less used, still contribute to the overall value of MongoDB. But it's largely true, all the same. And it's probably even more true of all the other open source companies that have been lining up to complain about public clouds like AWS "stealing" their code. None of these companies is looking for code contributions. Not really. When AWS, for example, has tried to commit code, they've been rebuffed.



Sen. Wyden Asks NIST to Develop Secure File Sharing Standards
Wyden also recommends implementing new technology and better training for government workers to help ensure that sensitive documents can be sent securely with better encryption. "Many people incorrectly believe that password-protected .zip files can protect sensitive data," Wyden writes in the letter. "Indeed, many password-protected .zip files can be easily broken with off-the-shelf hacking tools. This is because many of the software programs that create .zip files use a weak encryption algorithm by default. While secure methods to protect and share data exist and are freely available, many people do not know which software they should use." Wyden notes that the increasing number of data breaches, as well as nation-state attacks, point to the need to develop new standards, protocols and guidelines to ensure that sensitive files are encrypted and can be securely shared. He also asked NIST to develop easy-to-use instructions so that the public can take advantage of newer technologies. A spokesperson for NIST tells Information Security Media Group that the agency is reviewing Wyden's letter and will provide a response to the senator's concerns and questions.
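The gap Wyden describes is visible in code: Python's standard zipfile module implements only the legacy ZipCrypto scheme, so producing an AES-encrypted archive takes a third-party package such as pyzipper (assumed installed). The filenames and passphrase below are illustrative.

```python
# Create an AES-256-encrypted zip instead of relying on weak ZipCrypto.
import pyzipper

PASSPHRASE = b"use-a-long-random-passphrase"  # illustrative only

with pyzipper.AESZipFile("report.zip", "w",
                         compression=pyzipper.ZIP_DEFLATED,
                         encryption=pyzipper.WZ_AES) as zf:
    zf.setpassword(PASSPHRASE)
    zf.writestr("report.txt", "sensitive contents")

# Reading it back requires the same passphrase:
with pyzipper.AESZipFile("report.zip") as zf:
    zf.setpassword(PASSPHRASE)
    print(zf.read("report.txt").decode())
```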


Q&A on the Book Empathy at Work


Emotional empathy is inherent in us; when we see someone laughing, we smile. When we see someone crying, we feel sad. Cognitive empathy is understanding what a person is thinking or feeling; this one is often referred to as “perspective taking” because we are actively engaged in attempting to “get” where the person is coming from. Empathic concern is being so moved by what another person is going through that we are empowered to act. The majority of the time when people are talking about empathy, they are referring to empathic concern. These definitions of empathy are all accurate and informative. But a big point that I always try to make is that empathy is a verb; it’s a muscle that must be worked consistently for any real change to occur. That’s why everyone’s definition of what empathy is in their own lives is going to be a little bit different. We all feel understood in a different way, so each person truly has to define what empathy looks and feels like for themselves. For me, it’s allowing me to finish my thoughts. I’m a stutterer, and it sometimes takes me a bit to get a word or a thought out.


A Developer's Journey into AI and Machine Learning

There are challenges. Microsoft really has to sell developers and data engineers on the idea that data science, AI and ML are not some big, scary, hyper-nerd technology. There are corners where that is true, but this field is open to anyone who's willing to learn. And Microsoft is certainly doing its part with streamlined services and tools like Cognitive Services and ML.NET. End of the day, anyone who is already a developer/engineer clearly has the core capabilities to be successful in this field. All people need to do is level up some of their skills and add new ones. In some cases, people will need to unlearn what they've already learned, particularly around the notion of certainty. The way I like to put it, a DBA (database admin) will always give a precise answer. Inaccurate maybe, but never imprecise. "There are 864,782 records in table X," for example. But a data science/ML/AI practitioner deals with probabilities. "There's a 86.568% chance there's a cat in this picture." It's a change of mindset as much as a change in technologies.


Break free from traditional network security


While it can be argued that perimeterless network security will become essential to keep the wheels of commerce turning, Simon Persin, director at Turnkey Consulting, says: “A lack of network perimeters needs to be matched with technology that can prevent damage.” In a perimeterless network architecture, the design and behaviour of the network infrastructure should aim to prevent IT assets being exposed to threats such as rogue code. Persin says that by understanding which protocols are allowed to run on the network, an SDN can allow people to perform the legitimate tasks required by their role. Within a network architecture, a software-defined network separates the forwarding and control planes. Paddy Francis, chief technology officer for Airbus CyberSecurity, says this means routers essentially become basic switches, forwarding network traffic in accordance with rules defined by a central controller. What this means from a monitoring perspective, says Francis, is that packet-by-packet statistics can be sent back to the central controller from each forwarding element.
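A toy Python sketch of that control/forwarding split: the forwarding element below only matches packets against rules pushed down by a central controller, and keeps the per-rule counters it can report back. All fields and rules are invented for illustration.

```python
# A match-action table standing in for an SDN forwarding element.
RULES = []  # installed by the central controller, in priority order

def install_rule(match: dict, action: str):
    RULES.append({"match": match, "action": action, "packets": 0})

def forward(packet: dict) -> str:
    for rule in RULES:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            rule["packets"] += 1      # per-rule statistics for the controller
            return rule["action"]
    return "send-to-controller"       # unknown traffic: ask for a decision

install_rule({"proto": "tcp", "dst_port": 443}, "forward:port2")
install_rule({"proto": "tcp", "dst_port": 23}, "drop")  # telnet not allowed
print(forward({"proto": "tcp", "dst_port": 443}))  # forward:port2
print(forward({"proto": "udp", "dst_port": 53}))   # send-to-controller
```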


The Unreasonable Effectiveness of Software Analytics

Software analytics distills large amounts of low-value data into small chunks of very-high-value data. Such chunks are often predictive; that is, they can offer a somewhat accurate prediction about some quality attribute of future projects—for example, the location of potential defects or the development cost. In theory, software analytics shouldn’t work because software project behavior shouldn’t be predictable. Consider the wide, ever-changing range of tasks being implemented by software and the diverse, continually evolving tools used for software’s construction (for example, IDEs and version control tools). Let’s make that worse. Now consider the constantly changing platforms on which the software executes (desktops, laptops, mobile devices, RESTful services, and so on) or the system developers’ varying skills and experience. Given all that complex and continual variability, every software project could be unique. And, if that were true, any lesson learned from past projects would have limited applicability for future projects.


Robots can now decode the cryptic language of central bankers


But robots aren’t that smart yet, according to Dirk Schumacher, a Frankfurt-based economist at French lender Natixis SA, which this month started publishing an automated sentiment index of European Central Bank meeting statements. “The question is how intelligent it can become,” he said. “Maybe in a few years time we’ll have algorithms which get everything right, but at this stage I find it a nice crosscheck to verify one’s own assessments.” The main edge humans still have over machines is being able to read and understand ambiguity, Schumacher said. While Natixis’ system can quantify how optimistic or pessimistic ECB policy makers are looking at word choice and intensity, it can’t discern if a policy maker said something ironic — although arguably not all humans could either. “It’s not a perfect science and it’s hard to see that humans will be replaced by these methods anytime soon,” said Elisabetta Basilico, an investment adviser who writes about quantitative finance. Prattle, which was recently acquired by Liquidnet, claims its software accurately predicts G10 interest rate moves 9.7 times out of 10.
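For intuition only, here is a deliberately naive Python sketch of dictionary-based scoring of a policy statement; it is not Natixis's or Prattle's model, and the word lists are invented for illustration.

```python
# Score a statement from -1 (fully dovish) to +1 (fully hawkish) by
# counting hits against tiny, hand-picked word lists.
HAWKISH = {"tighten", "tightening", "inflation", "hike", "overheating"}
DOVISH = {"accommodative", "easing", "stimulus", "downside", "cut"}

def hawkishness(statement: str) -> float:
    words = [w.strip(".,;:") for w in statement.lower().split()]
    h = sum(w in HAWKISH for w in words)
    d = sum(w in DOVISH for w in words)
    return 0.0 if h + d == 0 else (h - d) / (h + d)

print(hawkishness(
    "The Governing Council stands ready to provide further stimulus and "
    "maintain an accommodative stance against downside risks."))  # -1.0
```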


Error-Resilient Server Ecosystems for Edge and Cloud Datacenters

Realizing our proposed error-resilient, energy-efficient ecosystem faces many challenges, in part because it requires the design of new technologies and the adoption of a system operation philosophy that departs from the current pessimistic one. The UniServer Consortium (www.uniserver2020.eu)—consisting of academic institutions and leading companies such as AppliedMicro Circuits, ARM, and IBM—is working toward such a vision. Its goal is the development of a universal system architecture and software ecosystem for servers used for cloud- and edge-based datacenters. The European Community’s Horizon 2020 research program is funding UniServer (grant no. 688540). The consortium is already implementing our proposed ecosystem in a state-of-the-art X-Gene2 eight-core, ARMv8-based microserver with 28-nm feature sizes. The initial characterization of the server’s processing cores shows that there is a significant safety margin in the supply voltage used to operate each core. Results show that some cores could run at 10 percent below the nominal supply voltage that the manufacturer advises. This could lead to a 38 percent power savings.


What is edge computing, and how can you get started?


Edge computing architecture is a modernized version of data center and cloud architectures with the enhanced efficiency of having applications and data closer to sources, according to Andrew Froehlich, president of West Gate Networks. Edge computing also seeks to eliminate bandwidth and throughput issues caused by the distance between users and applications. Edge computing is not the same as the network edge, which is more similar to a town line. A network edge is one or more boundaries within a network to divide the enterprise-owned and third-party-operated parts of the network, Froehlich said. This distinction enables IT teams to designate control of network equipment. Edge computing's ability to bring compute and storage data in or near enterprise branches is attractive to those who require quick response times and support for large data amounts. Edge computing can bring several benefits to enterprise networks with centralized management, lights-out operations and cloud-style infrastructure, according to John Burke


Cloudflare Criticizes Verizon Over Internet Outage


Cloudflare put the blame squarely on Verizon for not adequately filtering erroneous routes announced by DQE Communications, an ISP in Pennsylvania. It pulled no punches, saying there was no good reason for Verizon's failure other than "sloppiness or laziness." "The leak should have stopped at Verizon," writes Tom Strickx, a Cloudflare network software engineer, in the blog post. "However, against numerous best practices outlined below, Verizon's lack of filtering turned this into a major incident that affected many Internet services such as Amazon, Linode and Cloudflare." DQE used a BGP optimizer, which generates more specific BGP routes, Strickx writes; those more specific routes trump more general ones in announcements. DQE announced the routes to one of its customers, Allegheny Technologies, a metals manufacturing company, and from there the routes went to Verizon. To be fair, the ultimate responsibility falls on DQE for announcing the wrong routes, and Allegheny is somewhat to blame for passing those routes along. But then Verizon - one of the largest transit providers in the world - propagated the routes.
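The mechanics behind "more specific routes trump more general ones" is longest-prefix matching, which a few lines of Python can demonstrate; the prefixes below are illustrative, not the actual leaked routes:

    # Longest-prefix matching: given two overlapping announcements, routers
    # forward traffic along the more specific route. Illustrative prefixes only.
    from ipaddress import ip_address, ip_network

    routes = {
        ip_network("104.16.0.0/12"): "legitimate aggregate announcement",
        ip_network("104.16.0.0/24"): "more-specific route from a BGP optimizer",
    }

    destination = ip_address("104.16.0.1")
    matching = [net for net in routes if destination in net]
    best = max(matching, key=lambda net: net.prefixlen)   # longest prefix wins
    print(best, "->", routes[best])   # the /24 captures the traffic

This is why a leaked optimizer route is so dangerous: once propagated, it silently wins the routing decision against the legitimate, broader announcement.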



Quote for the day:


"Defeat is not the worst of failures. Not to have tried is the true failure." -- George Woodberry


Daily Tech Digest - June 25, 2019

AI in IoT elevates data analysis to the next level


In a typical enterprise network, IoT exists beyond the boundaries of the cloud, passing data back through a firewall, where it then takes up residence in storage and is made available to some process or application. But with so many different devices reporting -- a number that will steadily increase -- traffic problems are inevitable. Managing the flow of so much data from so many endpoints is beyond the resources of most companies. But wait; it gets worse. Many IoT applications are two-way streets, where data gathered by sensors has consequences in the locations being reported on; for example, adjusting the power consumption of a building based on changes in occupancy and weather. In many such cases, there's no time for data to make a round trip to the cloud. ... The fix for these problems is edge computing -- extending the processing power of an enterprise network by adding gateways and IoT devices that offer local processing power.
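A minimal sketch of that local-processing pattern follows, with a hypothetical threshold and function names: the gateway acts on readings immediately and forwards only a compact summary to the cloud.

    # Sketch of local processing at an edge gateway: act on sensor data
    # immediately and send the cloud only a compact summary. The threshold
    # and function names are hypothetical.
    from statistics import mean

    TEMP_THRESHOLD_C = 21.0

    def adjust_hvac(temperature):
        print(f"lowering cooling setpoint; observed {temperature:.1f} C")

    def handle_readings(readings):
        avg = mean(readings)
        if avg > TEMP_THRESHOLD_C:
            adjust_hvac(avg)   # react locally, no cloud round trip
        return {"avg": avg, "count": len(readings)}   # summary for the cloud

    print("forward to cloud:", handle_readings([20.9, 21.4, 22.0, 21.8]))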


.NET Core: Past, Present, and Future


The highlight of the .NET Core 3.0 announcement was support for Windows desktop applications, focused on Windows Forms, Windows Presentation Foundation (WPF), and UWP XAML. At the time of the announcement, .NET Standard was shown as a common basis for Windows desktop apps and .NET Core, and .NET Core was pictured as part of a composition containing ASP.NET Core, Entity Framework Core, and ML.NET. Support for developing and porting Windows desktop applications to .NET Core would be provided by "Windows Desktop Packs", additional components for compatible Windows platforms. ... Microsoft shows .NET 5 as a unifying platform for desktop, Web, cloud, mobile, gaming, IoT, and AI applications. It also shows explicit integration with all Visual Studio editions and with the command-line interface (CLI). The goal of the new .NET version is to produce a single, cross-platform .NET runtime and framework integrating the best features of .NET Core, .NET Framework, Xamarin, and Mono.


Google’s Hangouts Chat gets chatbot boost with Dialogflow

By bringing Dialogflow to Hangouts Chat, Google wants to simplify the process of creating natural language bots users can interact with. “With Dialogflow, you can create a natural-sounding conversational UI with just a few clicks,” said Jon Harmer, product manager, Google Cloud, in a blog post. “Because Dialogflow includes built-in Natural Language Understanding (NLU), your bot can quickly understand and respond to user messages.” Developers can make their Dialogflow bots available for use in Google’s team collaboration app via the Hangouts Chat Integrations page, where they can install a bot on their own account to test in the application. In addition, a new Hubot adapter has been introduced, allowing developers to bring Hubot bots into Hangouts Chat. A chatbot catalog is also on its way to improve discoverability as the number of bots grows; that catalog will be available in the “coming months,” Google said. “Google continues to aggressively move to enable intelligent chatbots and natural voice capabilities to add value and remove mundane steps in communications and collaboration,” said Wayne Kurtzman.


U.S. adds Chinese technology companies to export blacklist

Among those added to the blacklist was AMD’s Chinese joint-venture partner Higon, Commerce said in the statement. Also included was Sugon, which Commerce identified as Higon’s majority owner, along with Chengdu Haiguang Integrated Circuit and Chengdu Haiguang Microelectronics Technology, both of which the department said Higon had an ownership interest in. The ban affects AMD’s Chinese joint venture THATIC, which was established in 2016 and which AMD uses to license its microprocessor technology to Chinese companies including Higon. THATIC, or Tianjin Haiguang Advanced Technology Investment Co., is a Chinese holding company comprising an AMD joint venture with two entities, according to an AMD regulatory filing. THATIC provides chips to Sugon, a Chinese server and computer maker. Lisa Su, AMD’s chief executive officer, said at a recent conference in Taiwan that AMD would not license its newer technologies to Chinese companies.


There are multiple approaches to zero trust, but the main ones focus on identity, the gateway and the device. However, as the tide of mobile and cloud continues to rise, the limitations of gateway- and identity-centric approaches become more apparent. Identity-centric approaches provide limited visibility into the device, apps and threats, while still relying on passwords, one of the main causes of data breaches. Gateway-centric approaches likewise offer limited visibility into the device, apps and threats, while assuming that all enterprise traffic goes through the enterprise network, when in reality 25 per cent of it does not. Only a mobile-centric zero-trust approach addresses the security challenges of the perimeter-less modern enterprise while allowing the agility and anytime access that business needs. Mobile-centric zero trust verifies more attributes than both these approaches before granting access: it validates the device, establishes user context, checks app authorisation, verifies the network, and detects and remediates threats before granting secure access to any device or user.
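As a sketch of what "verify more attributes" can mean in code, here is a minimal access decision over those five checks; the attribute names are hypothetical, not any vendor's API:

    # Sketch of a mobile-centric zero-trust decision: every attribute must
    # pass before access is granted. Attribute names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        device_compliant: bool    # device posture validated
        user_verified: bool       # user context established
        app_authorised: bool      # app authorisation checked
        network_trusted: bool     # network verified
        threat_detected: bool     # on-device threat detection result

    def grant_access(r: AccessRequest) -> bool:
        return all((r.device_compliant, r.user_verified, r.app_authorised,
                    r.network_trusted, not r.threat_detected))

    print(grant_access(AccessRequest(True, True, True, True, False)))   # True
    print(grant_access(AccessRequest(True, True, True, False, False)))  # False

The design point is that any single failed attribute denies access, rather than trusting a session once a perimeter or password check has passed.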


7 steps to enhance IoT security

Controlling access within an IoT environment is one of the bigger security challenges companies face when connecting assets, products and devices. That includes controlling network access for the connected objects themselves. Organizations should first identify the behaviors and activities that are deemed acceptable for connected things within the IoT environment, and then put in place controls that account for this without hindering processes, says John Pironti, president of consulting firm IP Architects and an expert on IoT security. “Instead of using a separate VLAN [virtual LAN] or network segment, which can be restrictive and debilitating for IoT devices, implement context-aware access controls throughout your network to allow appropriate actions and behaviors, not just at the connection level but also at the command and data transfer levels,” Pironti says. This will ensure that devices can operate as planned while also limiting their ability to conduct malicious or unauthorized activities, he says. “This process can also establish a baseline of expected behavior that can then be logged and monitored to identify anomalies or activities that fall outside of expected behaviors at acceptable thresholds,” he says.
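A minimal sketch of such a context-aware control follows, with illustrative device types and command sets: expected behaviors act as an allow-list, and anything outside the baseline is denied and logged for anomaly review.

    # Context-aware control: each device class may issue only its expected
    # commands; anything else is denied and logged as an anomaly. The device
    # types and command sets are illustrative.
    EXPECTED_BEHAVIOR = {
        "thermostat": {"read_temp", "set_setpoint"},
        "door_sensor": {"report_state"},
    }

    audit_log = []

    def authorize(device_type: str, command: str) -> bool:
        allowed = command in EXPECTED_BEHAVIOR.get(device_type, set())
        if not allowed:
            audit_log.append((device_type, command))   # outside the baseline
        return allowed

    print(authorize("thermostat", "set_setpoint"))       # True
    print(authorize("thermostat", "open_ssh_session"))   # False, logged
    print(audit_log)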


Introduction to ELENA Programming Language

ELENA is a general-purpose, object-oriented, polymorphic language with late binding. It features message dispatching/manipulation, dynamic object mutation, a script engine/interpreter and group object support. ... There is an important distinction between "methods" and "messages". A method is a body of code, while a message is something that is sent. A method is similar to a function; in this analogy, sending a message is similar to calling a function. An expression which invokes a method is called a "message-sending expression". ELENA terminology makes a clear distinction between "message" and "method". A message-sending expression sends a message to an object, and how the object responds depends on its class. Objects of different classes will respond to the same message differently, since they will invoke different methods. Generic methods may accept any message with the specified signature (parameter types).
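Since ELENA syntax may be unfamiliar, the same message/method distinction can be expressed in Python: the same message is sent to objects of different classes, and each responds by invoking its own method.

    # The message/method distinction, shown in Python: "speak" is the message;
    # each class supplies its own method for it, so the response depends on
    # the receiver's class.
    class Dog:
        def speak(self):          # a method: a body of code
            return "Woof"

    class Robot:
        def speak(self):
            return "BEEP"

    def send(receiver, message, *args):
        return getattr(receiver, message)(*args)   # sending the message

    for obj in (Dog(), Robot()):
        print(send(obj, "speak"))   # same message, different methods invoked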


Cyber attackers using wider range of threats


“The key findings illustrate the importance of layered security protections,” said Corey Nachreiner, chief technology officer at WatchGuard Technologies. “Whether it be DNS-level filtering to block connections to malicious websites and phishing attempts, intrusion prevention services to ward off web application attacks, or multifactor authentication to prevent attacks using compromised credentials – it’s clear that modern cyber criminals are using a bevy of diverse attack methods.” “The best way for organisations to protect themselves is with a unified security platform that offers a comprehensive range of security services,” he said. Another key finding of the report is that Mac OS malware is on the rise. Mac malware first appeared on WatchGuard’s top 10 malware list in the third quarter of 2018, and two variants have now become prevalent enough to make the list in the first quarter of 2019, the report said. It added that this increase in Mac-based malware further debunks the myth that Macs are immune to viruses and malware, and reinforces the importance of threat protection for all devices and systems.


Google Cloud Scheduler is Now Generally Available


Users can schedule a job in Cloud Scheduler via its UI, CLI or API to invoke an HTTP/S endpoint, Cloud Pub/Sub topic or App Engine application. When a job starts, it sends a Cloud Pub/Sub message or an HTTP request to a specified target destination on a recurring schedule. The target handler then executes the job and returns a response indicating the outcome: either a success code (2xx for HTTP/App Engine and 0 for Pub/Sub), or an error, in which case Cloud Scheduler retries the job until it reaches the maximum number of attempts. Furthermore, once a user schedules a job, they can monitor it in the Cloud Scheduler UI and check its status. Google Cloud Scheduler is not the only managed cron service available in the public cloud; competitors Microsoft and Amazon have offered scheduler services for quite some time. Microsoft offers the Azure Scheduler service, which became generally available in late 2015 and is being replaced by the Azure Logic Apps service, where developers can use the scheduler connector; Logic Apps also offers additional capabilities for application and process integration, data integration and B2B communication. AWS, for its part, released its Batch service, with similar capabilities to Scheduler, in late 2016.
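As a minimal sketch of the target side of that contract (the port and job body here are hypothetical): returning a 2xx status tells Cloud Scheduler the run succeeded, while an error status triggers its retry logic.

    # Minimal HTTP target for a scheduled job, using only the standard library.
    # A 2xx response signals success; an error status makes Cloud Scheduler
    # retry up to the configured maximum. Port and job body are hypothetical.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def run_the_job():
        print("job executed")

    class JobHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            try:
                run_the_job()
                self.send_response(200)   # success: no retry
            except Exception:
                self.send_response(500)   # error: Cloud Scheduler retries
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), JobHandler).serve_forever()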


4 steps to developing responsible AI

As AI capabilities race ahead, government leaders, business leaders, academics and many others are more interested than ever in the ethics of AI as a practical matter, underlining the importance of having a strong ethical framework surrounding its use. But few really have the answer to developing ethical and responsible AI. Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence. It is imperative for business leaders to understand AI and make a top-down commitment to the responsible use of AI. Central to this is taking a human-centric approach to AI thinking and development. It is not enough to have the correct data, or an algorithm that performs accurately. It is critical to incorporate systems of governance, design and training that provide a framework for successfully implementing AI in an organization. A strong Responsible AI framework entails mitigating the risks of AI with imperatives that address four key areas



Quote for the day:


"Leaders keep their eyes on the horizon, not just on the bottom line." -- Warren G. Bennis