Daily Tech Digest - April 07, 2019

Can you teach humor to an AI?


“Artificial intelligence will never get jokes like humans do,” says Kiki Hempelmann, a computational linguist who studies humor at Texas A&M University-Commerce. “In themselves, they have no need for humor. They miss completely context,” he adds. Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany, elaborates on how complex it is for machines to process context: “Creative language — and humor in particular — is one of the hardest areas for computational intelligence to grasp.” Miller has analyzed more than 10,000 word plays and found it quite challenging. “It’s because it relies so much on real-world knowledge — background knowledge and commonsense knowledge. A computer doesn’t have these real-world experiences to draw on. It only knows what you tell it and what it draws from,” he concludes.



Security flaws in banking apps expose data and source code


Exposed source code, sensitive data, and access to backend services via APIs have been uncovered after a researcher downloaded various financial apps from the Google Play store and found that it took, on average, just eight and a half minutes before she was reading the code. Vulnerabilities including lack of binary protections, insecure data storage, unintended data leakage, and weak encryption were found in banking, credit card and mobile payments apps and are detailed in a report by cybersecurity company Arxan: In Plain Sight: The Vulnerability Epidemic in Financial Mobile Apps. "There's clearly a systemic issue here – it's not just one company, it's 30 companies and it's across multiple financial services verticals," Alissa Knight, cybersecurity analyst at global research and advisory firm Aite Group and the researcher behind the study, told ZDNet. The vast majority – 97 percent of the apps tested – were found to lack binary code protections, making it possible to reverse engineer or decompile the apps, exposing source code to analysis and tampering.


Why blockchain (might be) coming to an IoT implementation near you

Chains of binary data.
Blockchain technology can be counter-intuitive to understand at a basic level, but it’s probably best thought of as a sort of distributed ledger keeping track of various transactions. Every “block” on the chain contains transactional records or other data to be secured against tampering, and is linked to the previous one by a cryptographic hash, which means that any tampering with the block will invalidate that connection. The nodes - which can be largely anything with a CPU in it - communicate via a decentralized, peer-to-peer network to share data and ensure the validity of the data in the chain. The system works because all the blocks have to agree with each other on the specifics of the data that they’re safeguarding, according to Nir Kshetri, a professor of management at the University of North Carolina. If someone attempts to alter a previous transaction on a given node, the rest of the data on the network pushes back. “The old record of the data is still there,” said Kshetri. That’s a powerful security technique - absent a bad actor successfully controlling all of the nodes on a given blockchain, the data protected by that blockchain can’t be falsified or otherwise fiddled with.
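The hash-linking idea described above can be shown in a few lines of Python (an illustrative sketch only, not any particular blockchain implementation): each block stores the hash of its predecessor, so altering an earlier record breaks every link after it.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Link each record to its predecessor via a cryptographic hash."""
    chain = []
    prev = "0" * 64  # genesis marker
    for record in records:
        block = {"data": record, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain):
    """Recompute every link; any tampering invalidates a connection."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["pay Alice 5", "pay Bob 3"])
valid_before = is_valid(chain)
chain[0]["data"] = "pay Alice 500"   # alter a previous transaction
valid_after = is_valid(chain)
```

Tampering with the first block changes its hash, so the second block's stored `prev_hash` no longer matches and validation fails, which is exactly the "pushback" Kshetri describes.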


Researchers developed algorithms that mimic the human brain (and the results don’t suck)

Krotov and Hopfield’s work maintains the simplicity of the old school studies, but represents a novel step forward in brain-emulating neural networks. TNW spoke with Krotov who told us: If we talk about real neurobiology, there are many important details of how it works: complicated biophysical mechanisms of neurotransmitter dynamics at synaptic junctions, existence of more than one type of cells, details of spiking activities of those cells, etc. In our work, we ignore most of these details. Instead, we adopt one principle that is known to exist in the biological neural networks: the idea of locality. Neurons interact with each other only in pairs. In other words, our model is not an implementation of real biology, and in fact it is very far from the real biology, but rather it is a mathematical abstraction of biology to a single mathematical concept – locality. Modern deep learning methods often rely on a training technique called backpropagation, something that simply wouldn’t work in the human brain because it relies on non-local data.
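The locality principle Krotov describes can be illustrated with a toy Hebbian-style update (an illustrative sketch only, not Krotov and Hopfield's actual algorithm): each weight changes using only the activity of the two neurons it connects, with no global error signal of the kind backpropagation requires.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))  # 4 hidden units, 8 inputs

def local_update(W, x, lr=0.01):
    """Local learning rule: W[i, j] is adjusted using only the
    pre-synaptic input x[j] and post-synaptic activity y[i]
    (with an Oja-style decay term to keep weights bounded).
    No error is propagated backward through the network."""
    y = W @ x                                        # post-synaptic activity
    dW = lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W + dW

x = rng.normal(size=8)   # one input sample
W2 = local_update(W, x)  # weights move using only pairwise activity
```

By contrast, a backpropagation update for the same weights would need the downstream layers' errors, which is the non-local data the passage says real neurons cannot access.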


Self-Service Delivery


Self-Service Delivery is an approach that makes the tools necessary to develop and deliver applications available via self-service. It makes the actions we need to take as developers — starting, developing and shipping software — available as user-accessible tools, so that we can work at our own speed without getting blocked. By making actions automated and accessible, it's easier to standardize configurations and practices across teams. We need specific building blocks to enable Self-Service Delivery. The same principle at the heart of your favorite framework applies to delivery. If we think of delivery phases in framework terms, each phase has a default implementation, which can be overridden. For example, if the convention is that Node projects in my team are tested by running npm test, then I include a test script in my project. I don't write the code that runs the script, nor tell my build tool explicitly to do so. The same is true for other phases of delivery.
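As a small illustration of that convention-over-configuration idea (the file contents below are hypothetical, not from the article), a project opts into the shared pipeline simply by providing the script the convention expects:

```json
{
  "name": "example-service",
  "version": "1.0.0",
  "scripts": {
    "test": "jest"
  }
}
```

The shared pipeline's default implementation runs `npm test` for every Node project; the project only declares what that command should do, never how or when the pipeline invokes it.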


Artificial intelligence can now emulate human behaviors – soon it will be dangerously good

Robot
At the moment, there are enough potential errors in these technologies to give people a chance of detecting digital fabrications. Google's Bach composer made some mistakes an expert could detect. For example, when I tried it, the program allowed me to enter parallel fifths, a music interval that Bach studiously avoided. The app also broke musical rules of counterpoint by harmonizing melodies in the wrong key. Similarly, OpenAI's text-generating program occasionally wrote phrases like "fires happening under water" that made no sense in their contexts. As developers work on their creations, these mistakes will become rarer. Effectively, AI technologies will evolve and learn. The improved performance has the potential to bring many social benefits – including better health care, as AI programs help democratize the practice of medicine. Giving researchers and companies freedom to explore, in order to seek these positive achievements from AI systems, means opening up the risk of developing more advanced ways to create deception and other social problems. Severely limiting AI research could curb that progress.


The Race For Data And The Cybersecurity Challenges This Creates

High-tech today needs to be doing the exact same thing, with an emphasis on cybersecurity problems. Rather than sending devices and apps into the connected ecosystem willy-nilly, we need to fully understand what could happen when we do. How many people could be impacted? How many companies? What are the financial losses that could be sustained? What about losses to brand/image? In other words: do we really understand the implications of what we are creating here? These questions, if well researched, should be enough to slow down time-to-market and eventually stop breaking so many things. This should be performed both at the development stage in every company and the adoption stage. Companies creating products have a responsibility to their customers to ensure safety and they can’t do that if they don’t fully take everything into account. On the other end of the spectrum, CIOs, CTOs, and anyone responsible for buying and adopting new tech in your business needs to perform the same sort of analysis. Don’t just buy tech for tech’s sake.


Serverless computing growth softens, at least for now

Plans or intentions for serverless implementations have slipped as well, the Cloud Foundry survey also shows. Currently, 36 percent report evaluating serverless, compared to 42 percent in the previous survey.  Some of this may be attributable to the statistical aberrations that occur within surveys that are conducted within months of one another -- don't be surprised if the numbers pop again in the fall survey. Diving deeper into the adoption and planned adoption numbers, the survey's authors point out that within organizations embracing serverless architecture, usage is actually proliferating. For users and evaluators, 18 percent say they are broadly deploying serverless across their entire company, double the percentage (9 percent) who said that only one year ago.  Still, it is telling that there is some degree of caution being exercised when moving to serverless architecture. What's behind the caution?


Vulnerability Management: 3 Questions That Will Help Prioritize Patching


There is usually a significant delta between intended network segmentation and access rights, and what actually exists. Credentials and connections that introduce risk get set up in a variety of ways. We call this actual connectivity the “access footprint.” Throughout the normal work day, users connect and disconnect from various systems and applications, leaving behind cached credentials and potential “live” connections. The access footprint changes constantly. Some risky conditions are fleeting; others can persist for a very long time. But even if these conditions are short-lived, an attacker situated in the right place at the right time (“right” for them, wrong for you!) has plenty to work with. A new report published by CrowdStrike underscores the importance of proactively hardening the network against lateral movement. It’s a vitally important complement to traditional vulnerability management.


The Difference Between Microservices and Web Services


Microservices architecture involves breaking down a software application into its smaller components, rather than just having one large software application. Typically, this involves splitting up a software application into smaller, distinct business capabilities. These can then talk to each other via an interface. ... So, if microservices are like mini-applications that can talk to each other, then what are web services? Well, they are also mini-applications that can talk to each other, but over a network, in a defined format. They allow one piece of software to get input from another piece of software, or provide output, over a network. This is performed via a defined interface and language, such as XML. If you’re running on a network where your software components or services won’t be co-located, or you want the option of running them in separate locations in the future, then you will likely need to use web services in some form.
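The distinction can be made concrete with a toy example (the names and the use of JSON are illustrative; as the article notes, real web services often use formats such as XML): one component exposes data over the network through a defined interface, and another component consumes it.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    """A tiny 'web service': it answers requests over the network
    in a defined format (JSON), so any caller that speaks that
    interface can consume it, co-located or not."""
    PRICES = {"widget": 9.99, "gadget": 24.50}

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "price": self.PRICES.get(item)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

# Serve on an ephemeral local port in a background thread.
server = HTTPServer(("127.0.0.1", 0), PriceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second component gets its input from the first, over the network.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/widget") as resp:
    reply = json.loads(resp.read())

server.shutdown()
```

The key point is that the two components share nothing but the network interface and the agreed data format, which is what lets them run in separate locations later.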



Quote for the day:


"No amount of learning can cure stupidity and formal education positively fortifies it." -- Stephen Vizinczey


Daily Tech Digest - April 06, 2019

Artificial intelligence, machine learning and intelligence


Besides processing information in the “classic” way, quantum computers use two specific characteristics of quantum systems: superposition – where two or more quantum states can be added together – and entanglement, which implies, in a counter-intuitive way, the presence of many remote correlations among all the physical quantum states examined. The result is an availability of data and calculation speeds that enables previously unimaginable operations: the analysis of continental climate change; the world economic cycles of raw materials; the number and physical constants of galaxies in space. In the future, there will also be convergence between AI and the Internet of Things, which will make both the construction of vehicles and their driving autonomous. Another short-term integration will be between blockchain technology and artificial intelligence. We have often spoken about blockchain, but in this case it is above all the integration between the blockchain “closed” network and a selective data collection or, otherwise, a patented and still secret technology.



Continuous Delivery Foundation seeks smoother CI/CD paths


One reason is enterprises must pick from a large menu of often fragmented tools in the CI/CD market, and then integrate the various tools into their CI/CD pipelines. Among the many tools in the CI/CD landscape are Shippable, CloudBees Jenkins, Atlassian's Bamboo, Bitnami, CircleCI, Travis CI, JetBrains' TeamCity and Microsoft's Azure DevOps Server. Nearly every company also creates software to automate its business processes, so CI/CD tools are in higher demand than ever. Despite some consolidation in the DevOps arena -- JFrog recently acquired Shippable, and CloudBees snapped up Codeship -- enterprises do face a choice: They must integrate several different tools to build their pipelines, or lock into an end-to-end DevOps tools environment with one of the major cloud providers. To help simplify the process, the Linux Foundation formed the Continuous Delivery Foundation (CDF) in mid-March. Among the CDF's founding members, which span open source software, platforms and tools, are the following: Alibaba, Autodesk, Capital One, CircleCI, CloudBees, GitLab, Google, Huawei, IBM, JFrog, Netflix, Puppet, Red Hat and SAP.



The Best Decision: Your Future and Serverless Stream Processing

A streaming data processing structure usually comprises two layers—a storage layer and a processing layer. The former is responsible for ordering large streams of records and facilitating persistence and accessibility at high speeds. The processing layer takes care of data consumption, executing computations, and notifying the storage layer to get rid of already-processed records. Data processing is done for each record incrementally or by matching over sliding time windows. Processed data is then subjected to streaming analytics operations, and the derived information is used to make context-based decisions. For instance, companies can track public sentiment changes about their products by analyzing social media streams continuously—the world's most influential nations can intervene in decisive events like presidential elections in other powerful countries—mobile apps can offer personalized recommendations for products based on the geo-location of devices and user emotions.
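A minimal sketch of that incremental, sliding-time-window style of processing (illustrative only; production streaming engines are far more involved): each record updates the aggregate as it arrives, and records that fall out of the window are evicted, mirroring the processing layer telling the storage layer to discard already-processed records.

```python
from collections import deque

class SlidingWindowAverage:
    """Process a stream one record at a time: each arrival updates the
    running aggregate in O(1), and records older than `window_seconds`
    are evicted so storage only keeps what the window still needs."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.records = deque()   # (timestamp, value), oldest first
        self.total = 0.0

    def add(self, timestamp, value):
        self.records.append((timestamp, value))
        self.total += value
        self._evict(timestamp)

    def _evict(self, now):
        # Discard records that have slid out of the time window.
        while self.records and self.records[0][0] <= now - self.window:
            _, old_value = self.records.popleft()
            self.total -= old_value

    def average(self):
        return self.total / len(self.records) if self.records else 0.0

stream = SlidingWindowAverage(window_seconds=60)
for ts, value in [(0, 10.0), (30, 20.0), (70, 30.0)]:
    stream.add(ts, value)
# By t=70, the record from t=0 has left the 60-second window,
# so only the records at t=30 and t=70 contribute to the average.
```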


10 Interesting Facts About Chatbots


People don’t really care if your chatbot has a great personality. Especially if that chatbot can’t solve an issue that one of your customers is currently experiencing. Make sure you focus on utility over personality. Forty-eight percent of respondents in the same LivePerson survey said they prefer chatbots that can solve problems. However, don’t forget about speed! Consumers value friendliness and ease of use the most in chatbots, but speed is a close third, according to Aspect’s research. Speed is more important to consumers than having a successful interaction and even accuracy. ... Facebook has evolved a ton over the years. It’s no longer just a place to keep in touch with friends from school and spy on your ex. Now it’s also a place to buy things. In fact, a new model is taking shape — one where people don’t have to click on a link, leave Facebook to visit a traditional company website, add stuff to a shopping cart, and complete a purchase. That’s because 37 percent of people are open to the idea of buying items on the social network, according to the same HubSpot research. And you can be sure that number will continue to grow as more people are exposed to and adopt chatbots.


5G & Industry 4.0 at Hannover Messe 2019

Woman wearing AR lenses interacting with a robotic arm.
Ericsson sees mobile technology as a new foundation to accelerate and support these new technologies. If factories are to enable digital twins of all processes and workflows, reliable wireless capabilities and low latency are a necessity. With 5G, digital twins can be accessed through remote VR monitoring, supporting transparency in factories. For example, one of our interactive proof points is a virtual tour of FCA Mirafiori plant in Torino where the visitor can “move” within it, monitoring key processes for bottlenecks and machinery parameters like vibration and temperature. We also address the challenges of companies with distributed production sites. A common problem is that similar processes perform differently at different locations. To solve it, plants must break down siloes and introduce transparency to optimize and align these processes. With Fraunhofer IPT, Ericsson presents the 5G Production Cockpit, giving a real-time view of processes in Aachen as well as Stockholm, transmitting live data, creating digital twins. With centralized data and analytics, current as well as historical data are compared for deeper insights.


Form a hybrid integration plan for your architecture


The first challenge is to figure out exactly what your requirements are as an architect. You can have a narrow perspective and focus on hybrid integration in the context of a particular project or initiative, or you can have a holistic perspective. And if you have a holistic perspective, it's hard work to figure out exactly what your integration requirements are today and what they will be in the next, let's say, three to five years, because of all these things happening. The second is selecting the appropriate combination of technologies. Architects would love to have one single [hybrid integration] platform that can cover them all, which can connect IoT devices, mobile devices, APIs, cloud, etc. This is difficult. In the market, there are many [hybrid integration] products, but few are good at supporting all these different scenarios. So, identify what is the right combination of technologies that can be used to solve the problem. ... Sometimes, you cannot put the same platform in the three environments. Maybe on-premises, you have more demanding requirements than in the cloud.


Domain-Oriented Observability

"Observability" has a broad scope, from low-level technical metrics through to high-level business key performance indicators (KPIs). On the technical end of the spectrum, we can track things like memory and CPU utilization, network and disk I/O, thread counts, and garbage collection (GC) pauses. On the other end of the spectrum, our business/domain metrics might track things like cart abandonment rate, session duration, or payment failure rate. Because these higher-level metrics are specific to each system, they usually require hand-rolled instrumentation logic. This is in contrast to lower-level technical instrumentation, which is more generic and often is achieved without much modification to a system's codebase beyond perhaps injecting some sort of monitoring agent at boot time. It's also important to note that higher-level, product-oriented metrics are more valuable because, by definition, they more closely reflect that the system is performing toward its intended business goals. By adding instrumentation that tracks these valuable metrics we achieve Domain-Oriented Observability.


Why Cybersecurity Matters: A Lawyer’s Toolkit


The stark reality is that most attorneys are highly independent and singularly focused on servicing their clients, whether as in-house or outside counsel. The extra steps required to access files and applications with oft hard-to-remember (but more secure) passwords are not always congruous with billable hours and around-the-clock attention to deliverables. Lawyers may compromise on security to ensure direct communications with clients on their platform of choice, in pursuit of the almighty billable hour. Another vulnerability is that attorneys crave information, the more the better. This trait is something that savvy hackers understand and will use to their advantage. Email phishing, in that regard, is a frequent tactic. The smart cyber-villain will quickly learn how to dupe attorneys and their assistants by sending attachments and links by email that appear to come from a legitimate source. Once said attachment is opened — bingo! — the malware starts to execute and do the dirty work behind the scenes, scouring the device for desired data points and eventually securing access to an internal network.


Does IT need Devops Managers?


If you read Agile literature, you’ll realize that the reason Agile in general and Scrum in particular promote the role of a Scrum “Master” rather than a Manager is because the latter often oversteps his authority and mandate. The effect on the experts is disastrous. They feel ‘controlled’, with no motivation left to appreciate the overall objective of the venture they’re part of, and confine themselves to doing just what they’re asked to do. This is the beginning of the ‘silo’ mentality — the very mentality Devops is supposed to eliminate. If you look deeper to understand the silo between Dev and Ops, you’ll observe that the silo gets deeper as you go lower in the hierarchy — it’s not so deep at the Dev and Ops management layer. So, the point is that IT management needs to look at itself in the mirror and assess the degree to which it has contributed to this chasm between Dev and Ops. Managers need to step back from their command-and-control approach to a far more ‘watch from a distance and protect’ approach. They need to empower the ‘experts’, allow them to mingle, interact and collaborate, show them the big picture, assert confidence in them to solve the big problems and create a win-win platform.


When should I choose between serverless and microservices?


Microservices are best suited for long-running, complex applications that have significant resource and management requirements. You can migrate an existing monolithic application to microservices, which makes it easier to modularly develop features for the application and deploy it in the cloud. Microservices are also a good choice for building e-commerce sites, as they can retain information throughout a transaction and meet the needs of a 24/7 customer base. On the other hand, serverless functions only execute when needed. Once the execution is over, the computing instance that runs the code decommissions itself. Serverless aligns with applications that are event driven, especially when the events are sporadic and the event processing is not resource-intensive. Serverless is a good choice when developers need to deploy fast and there are minimal application scaling concerns. ... As a rule of thumb, choose serverless computing when you need automatic scaling and lower runtime costs, and choose microservices when you need flexibility and want to migrate a legacy application to a modern architecture.



Quote for the day:


"Learn from the mistakes of others. You can never live long enough to make them all yourself." -- Groucho Marx


Daily Tech Digest - April 05, 2019

India’s new Software Products Policy marks a Watershed Moment in its Economic History


It is in this light that the recently rolled out National Software Products Policy (#NSPS) by the Ministry of Electronics & IT (MeitY), Government of India marks a watershed moment. For the very first time, India has officially recognised the fact that software products (as a category) are distinct from software services and need separate treatment. So dominated was the Indian tech sector by outsourcing & IT services that “products” never got the attention they deserve – as a result, that industry never blossomed and was relegated to a tertiary role. Remember that quote – “What can’t be measured, can’t be improved; And what can’t be defined, can’t be measured”. The software policy is in many ways a recognition of this gaping chasm and marks the state’s stated intent to correct the same by defining, measuring and improving the product ecosystem. Its rollout is the culmination of a long period of public discussions and deliberations where the government engaged with industry stakeholders, Indian companies, multinationals, startups, trade bodies etc. to forge it out.


How Lessons from Production Adoption Resulted in a Rewrite of the Service Mesh

Linkerd is an open-source service mesh and Cloud Native Computing Foundation member project. First launched in 2016, it currently powers the production architecture of companies around the globe, from startups like Strava and Planet Labs to large enterprises like Comcast, Expedia, Ask, and Chase Bank. Linkerd provides observability, reliability, and security features for microservice applications. Crucially, it provides this functionality at the platform layer. This means that Linkerd’s features are uniformly available across all services, regardless of implementation or deployment, and are provided to platform owners in a way that’s largely independent of the roadmaps or technical choices of the developer teams. For example, Linkerd can add TLS to connections between services and allow the platform owner to configure the way that certificates are generated, shared, and validated without needing to insert TLS-related work into the roadmaps of the developer teams for each service.


How to get your company’s people invested in transformation


Transformation, driven by new industrial platforms, geopolitical shifts, global competition, and changing consumer demand, is front-page news because it moves share prices, tests leadership ability and mettle, and creates new business models that change how whole sectors operate. But we rarely talk about the people who live through and help drive these often-wrenching changes. Can a global company successfully transform without bringing along its 30,000 employees? I doubt it. The human dimension is profoundly important. But too often, it’s forgotten or under-recognized in the rush to restructure or launch initiatives. Leaders who can engage emotionally with employees and humanize change initiatives by creating inspiration and innovation are most likely to succeed. This may sound obvious, but is a challenge for type A leaders who overly emphasize process, effort, and control. Transformational change often requires leaders to adopt an “antihero” style, characterised by empathy, humility, self-awareness, flexibility, and an ability to acknowledge uncertainty.


Secure Your Migration to the Cloud


Having a clearly defined and enforceable data lifecycle strategy, ensuring data is protected in transit and at rest, is one of the most important aspects of any cloud migration. You need to understand what sensitive data you are migrating and leverage the tools and processes to keep it protected, including cloud access security brokers (CASB). A cloud access security broker, according to Gartner, “is an on-premises or cloud-based security policy enforcement point that is placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as cloud-based resources are accessed.” CASBs are powerful tools because they give you a centralized view of all your cloud resources. Many IT teams that deploy a CASB for the first time realize that there are many cloud resources in use that they were previously unaware of, some of which may be placing sensitive data at risk. By using CASBs and other tools, you can regain visibility into where data resides and apply the proper safeguards to keep it protected.



The Matrix at 20: A Metaphor for Today's Cybersecurity Challenges

Shape-shifting is core to the movie's plot. "Agents," Neo's sinister enemies, take over the bodies of innocent bystanders in their relentless pursuit of Neo and his crew. The cybersecurity analogy here is an advanced persistent threat (APT) group utilizing stolen credentials to gain a foothold into an organization — one of the most pernicious elements facing today's enterprise. Modern breaches often involve malicious APT-like agents gaining access to an employee's credentials in order to achieve their goal. This usually happens as a result of spearphishing attempts, enabling attackers to steal customer data, intellectual property, or financial and banking data. Just as Neo stays vigilant in looking for constant threats, CISOs fight the epidemic of stolen credentials with proactive risk-based authentication techniques that stop attackers from even obtaining a foothold in the first place. The key in both situations is having visibility into attacker behavior. As Neo begins his journey to the "real world," a jaded crew member, Cypher, asks, "Why, oh why, didn't I take the blue pill?"


Bolster enterprise application support from dev to deployment


Declarative models help teams discover how code has deviated from the declared goal state. Use the diff feature in these tools -- some examples include kubectl diff for Kubernetes and the --diff option in Ansible playbooks -- to compare current and goal-state conditions and alert you to differences. Jenkins is a common automation server to integrate CI/CD with a repository, with competitors such as Integrity, GoCD or GitLab CI/CD. When choosing a tool, ensure that it will integrate with your organization's repository to maintain that single source of truth. There are broad management toolkits available for IT organizations that don't create tool integrations internally. For example, Weaveworks offers an integrated tool and a GitOps-centric distribution of Kubernetes, and it recommends and supports tool integrations for the latter. Diamanti and Rancher Labs have similar container platform capabilities. Start a GitOps approach to enterprise application support with a review of the components and capabilities of a managed toolkit, then weigh the benefits of a single-source option or put together a collection of tools to meet your specific requirements.


AI pioneer: ‘The dangers of abuse are very real’

Killer drones are a big concern. There is a moral question, and a security question. Another example is surveillance — which you could argue has potential positive benefits. But the dangers of abuse, especially by authoritarian governments, are very real. Essentially, AI is a tool that can be used by those in power to keep that power, and to increase it. Another issue is that AI can amplify discrimination and biases, such as gender or racial discrimination, because those are present in the data the technology is trained on, reflecting people’s behaviour. ... Deep learning, as it is now, has made huge progress in perception, but it hasn’t delivered yet on systems that can discover high-level representations — the kind of concepts we use in language. Humans are able to use those high-level concepts to generalize in powerful ways. That’s something that even babies can do, but machine learning is very bad at.


Reliance Jio’s latest acquisition is a $100M bet on the future of internet users in India

Reliance Jio Becomes Number Two In India
Jio’s aggressive data plan strategy, which started with free voice calls and free 4G data, disrupted India’s telecom market and forced the incumbents to move quicker and reduce prices — mobile data is reportedly now cheaper in India than anywhere else on the planet. It was, of course, a huge hit with consumers. The operator has consistently led on 4G subscriber numbers and it is ranked third overall with over 280 million customers, or around 23 percent market share. Clearly, keeping up with what’s next is a critical part of its plan to grow bigger still. Vaish said Haptik wasn’t under pressure to sell but the team found an “ideal match in terms of philosophy” with Jio, which is also exploring alternative ways to enable consumers to interact with its devices and service. The company has a ‘Hello Jio’ assistant on its devices, and Haptik may help it further its strategy in the future although Vaish said that hasn’t been nailed down at this point. Jio is allowing Haptik to continue to work with customers because, at this point, enterprise services are the “only proven business” for conversational platforms, Vaish said.



Sorry, graphene—borophene is the new wonder material that’s got everyone excited


This exotic substance wasn’t synthesized until 2015, using chemical vapor deposition. This is a process in which a hot gas of boron atoms condenses onto a cool surface of pure silver. The regular arrangement of silver atoms forces boron atoms into a similar pattern, each binding to as many as six other atoms to create a flat hexagonal structure. However, a significant proportion of boron atoms bind only with four or five other atoms, and this creates vacancies in the structure. The pattern of vacancies is what gives borophene crystals their unique properties. Since borophene’s synthesis, chemists have been eagerly characterizing its properties. Borophene turns out to be stronger than graphene, and more flexible. It is a good conductor of both electricity and heat, and it also superconducts. These properties vary depending on the material’s orientation and the arrangement of vacancies. This makes it “tunable,” at least in principle. That’s one reason chemists are so excited. Borophene is also light and fairly reactive. That makes it a good candidate for storing metal ions in batteries.


Discovering Culture through Artifacts

First of all, managers have power over their team. This power often takes the form of rewards (pay raises, promotions, etc.) and punishments (bad performance reviews, terminations, etc.). In both cases, a manager is rewarding and punishing members of their team based on their behavior. It is through these incentives and disincentives that the culture of a team, organization and/or company is defined. Members of the team observe which behaviors are rewarded and punished, and tailor their own behavior in turn. Interestingly enough, it doesn’t matter what a manager says; what matters is which behaviors they reward and punish. A second means by which a manager can influence the culture of a team is by modeling the behavior they want the team to exhibit. One can learn a lot by observing the behavior of others, especially when that person is in a position of power or influence. I see this a lot when it comes to modeling behavior around giving and receiving feedback. Great managers know how to listen and thank people for feedback, and they model this behavior for their team.



Quote for the day:


"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance." -- Thom S. Rainer


Daily Tech Digest - April 04, 2019

Is it too soon for AI in the education landscape?


Even if schools did have enough money, not only is their choice of software limited, but many heads and teachers are neither trained nor qualified to either select or use even basic educational technology, let alone AI tools. There is also a widespread fear of the unknown, part of which includes the much-discussed issue of jobs being automated out. Another major concern relates to ethics, believes Elena Sinel, who is a member of the All-Party Parliamentary Group on AI and also founder of Acorn Aspirations and Teens in AI, which provide various forums for young people to learn tech skills. A key challenge in this context is in ensuring AI does not end up doing “more harm than good”, she says. “So it’s about looking at who is accountable if things go wrong – for example, what happens if there’s a data leak and who is ultimately in charge of the data? Or what happens if AI doesn’t assess students fairly or accurately in exams, for instance?” says Sinel. Such questions also fit into a wider debate around whether schools are currently set up to provide young people with the skills required for the workplace of the future, or whether fundamental change is required.



Prepare Now for Next-Generation Cyber Threats

Impacts will be felt across a range of industries. Malicious attacks may result in automated vehicles changing direction unexpectedly, high-frequency trading applications making poor financial decisions, and airport facial recognition software failing to recognize terrorists. Where machine learning systems are compromised, organizations will face significant financial, regulatory, and reputational damage, and lives will be put at risk. Nation states, terrorists, hacking groups, hacktivists, and even rogue competitors will turn their attention to manipulating machine learning systems that underpin products and services. Attacks that are undetectable by humans will target the integrity of information. Widespread chaos will ensue for those dependent on services powered primarily by machine learning. Companies should assess their offerings and dependency on machine learning systems before attackers exploit related vulnerabilities.
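Attacks on machine learning systems of the kind described above typically rely on adversarial examples: inputs perturbed just enough to change a model's output while looking unremarkable to a human. The sketch below is purely illustrative — a toy linear classifier with made-up dimensions and numbers, not any real attack — but it shows the core idea that a small per-feature change can add up to a large shift in the model's decision:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 1000  # a hypothetical input with 1000 features

# Toy linear classifier: predict +1 ("benign") when w . x > 0.
w = rng.normal(size=d) / np.sqrt(d)
x = rng.normal(size=d)
if w @ x < 0:          # ensure the clean input is classified +1
    x = -x

# FGSM-style perturbation: each feature moves by at most eps, yet the
# score shifts by eps * sum(|w_i|), which grows with dimensionality.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(np.sign(w @ x), np.sign(w @ x_adv))  # clean vs. perturbed decision
```

The per-feature change is bounded by `eps`, so in a high-dimensional input (pixels, sensor readings) the perturbation is hard for a human to notice even though it flips the classifier's decision — which is why such attacks are described as "undetectable by humans."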



Rethinking reskilling: How to find key hidden talent within your organization  


To overcome the talent gap and foster adaptive workforces able to keep up with ongoing transformations in tech and industry, there is a clear need to shift from traditional L&D techniques like seminars and online training sessions to leveraging existing experts within the organization, so that we harness the collective intelligence of individuals and teams. These are employees who often already have the skills and knowledge that others need and follow the development of those fields closely. As a result, they can curate and contextualize that knowledge better than any external teacher, hence making it easier for others to absorb it. Companies also need to tap what can be a hidden resource of knowledge, identifying “invisible” go-to resources; i.e., knowledgeable employees who may be currently unrecognized or perhaps are not even hierarchically high in the company structure, but seem to be go-to people for large networks of employees. Organizations can consider practices similar to Genpact’s Genome reskilling initiative, which uses advanced human network analysis techniques to identify these invaluable knowledge leaders outside of the usual suspects of widely known company subject matter experts.


A Framework for High-Value Big Data

More and more companies are achieving the monetization of data by improving efficiencies, developing new products, growing new markets, and by reducing risks. Saxena talked about Netflix's original series like Orange is the New Black that are a direct result of data-driven innovation. She elaborated on the big data framework elements. Organization maturity is about hard assets in an organization, like its strategy, data, quality etc. Every organization should have a business strategy, as well as a data strategy. The internal competencies are about people, and focus on soft assets like leadership, engagement, and adaptability. Health care organizations in the field of precision health like Geisinger are taking advantage of big data and genomic sequencing to transform healthcare practices, in order to prevent people from becoming sick and to treat people more as individuals (customers), rather than just patients. Data governance initiatives should include aspects of data integration, quality, accessibility and data security.


Leading DevOps program Chef goes all in with open source


What does that mean for Chef's customers? Jacob said, "Chef Software produces only open-source software projects, in the commons. It distributes that software as an enterprise product. For current Chef Software customers, nothing changes. For enterprise users of Chef products who are not customers, they can decide to either pay for Chef's distribution, or they can make or consume an alternative." Going deeper in the new Chef FAQ, Chef stated: "We will begin to attach commercial license terms to our software distribution (binaries) with the next major release." So, if you download and compile the code yourself, you're welcome to use it. But, if you download the binaries, you'll have to pay for them. If that sounds familiar, it should. It's a variation of how Red Hat and SUSE, for example, release their enterprise Linux distributions. ... "For existing commercial customers there will be no immediate changes until their next renewal when they will get licensed onto new SKU's representing the same core products."


Bitcoin, BlackRock And The Rise Of Alternatives

As an alternative asset, the appeal of crypto is that its movements are uncorrelated with the rest of the market, says Mark Yusko, CEO of Morgan Creek Capital Management, which oversees $1.5 billion in assets, including a $40 million blockchain-focused VC fund. “Stocks or bonds derive their value from factors like GDP growth, profitability and interest rates. A cryptocurrency network derives its value from usage growth, adoption, regulation and technology. All of those things are uncorrelated with traditional measures of stocks and bonds.” ... Yusko claims inbound interest from institutional investors is growing. This week, he’s meeting with a California municipal pension fund. He adds that more institutional-investor conferences are including talks on cryptocurrencies. Teddy Fusaro, chief operating officer of Bitwise, a San Francisco digital asset manager and creator of the first crypto index fund, says institutional investors are showing increasing sophistication. “A year ago,” he says, “the conversation might have been, ‘How do we know bitcoin is going to survive?’ Or ‘Who is the CEO of bitcoin?’”


6 Essential Skills Cybersecurity Pros Need to Develop in 2019

On their face, these stats may engender a bit of complacency from cybersecurity professionals. It would only be natural to figure that anybody with a pulse and some security experience has got it made. But here's the rub. Many disruptive forces are at play that are set to drastically change the way security duties are carried out in the coming years. New security automation platforms, new architectures, and complex hybrid cloud implementations require major shifts in bread-and-butter security technical knowledge. Not only is security technology changing rapidly, but so are many of the fundamental roles held by cybersecurity professionals. Tons of emerging technologies and pervasive use of the Internet of Things are touching every aspect of business operating models, and software delivery is becoming more agile and embedded into lines of business. As a result, security pros are tasked to take positions requiring more consultative leadership and more enablement of democratized security across the organization.


What Is a Scaleup Company and How Is It Different from a Startup?

From a venture capital and entrepreneurial perspective, a scaleup company is considered to be in a later growth phase, after successfully maneuvering through the period of being a startup and having established a sustainable business model with a positive outlook on organizational growth and improving profitability. For additional information on this aspect, you can also have a look at “How to Upscale Like a Boss”. It does not take much to “found” a startup company. Anybody with an interesting idea can register a company, which could then be considered a startup. It then either fails or becomes successful after a lot of hard work. The more interesting question is: when does a company stop being a startup? As soon as the startup has finished an MVP (minimum viable product) and has a stable monthly income, which is hopefully more than the company’s expenses, the organization ceases to be a startup. And that’s a good thing. Being a startup is not, in itself, something good or aspirational. To read more about the exit of this phase, you can also read our article “When Does a Company Stop Being a Startup?”.


Joining Human And Artificial Intelligence

Although the aim of AI is to imitate HI to the point where both are indistinguishable, AI and HI are fundamentally different. Humans learn via the senses and past experience, and they are emotionally intelligent, which is something that AI is yet to crack. But AI is analytical and logical in a way that humans aren’t, and with this, it is capable of formulating and processing in ways that humans can’t. AI can take huge datasets and whittle them down to snippets of relevant information quickly. It can complete tasks in minutes as opposed to days, and it can identify data discrepancies that humans would never spot. Artificial and human intelligence are a match made in business heaven. The AI-HI model is already in practice across a number of sectors. In healthcare, clinical decisions are aided by artificially intelligent systems that search through historical data at a pace that human professionals never could. But, that said, getting a diagnosis direct from AI would be a very different experience to getting it from a doctor or nurse. Naturally you need both – AI augmenting human intelligence can lead to increased efficiency and accuracy.


How the data mining of failure could teach us the secrets of success

Since learning should reduce the number of attempts required before achieving success, it should lead to a narrower distribution of failure streaks than the exponential form predicted by the chance model. But to the surprise of Yin and co, failure streaks do not follow this pattern either. In fact, they have a much fatter-tailed distribution. “These observations demonstrate that neither chance nor learning alone can explain the empirical patterns underlying failures,” the researchers say. So what other factors are important? To find out, Yin and co modeled the way people learn from experience and how this influences their next attempt. In particular, they modeled whether people take into account all their previous experiences or just some of them. The resulting model considers a complete range of learning—from agents who take all their past experience into account to those who do not take any of their past experience into account, and everything in between. The team say the model predicts a phase change in the behavior that matches the empirical data.
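The chance model's baseline prediction is easy to check with a quick simulation: if every attempt succeeds independently with a fixed probability, the failure streak before the first success is geometrically distributed, so its tail decays exponentially. The fat-tailed streaks seen in the empirical data are exactly what this baseline cannot produce. A minimal sketch (the success probability below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.2        # illustrative per-attempt success probability
n = 100_000    # number of simulated agents

# Pure-chance model: attempts are i.i.d., so the number of failures
# before the first success is geometric with parameter p.
streaks = rng.geometric(p, size=n) - 1

# The tail P(streak >= k) should fall off as (1 - p)**k -- an
# exponential decay, with no fat tail.
for k in (0, 5, 10):
    empirical = np.mean(streaks >= k)
    predicted = (1 - p) ** k
    print(k, round(empirical, 3), round(predicted, 3))
```

Any model mixing chance with learning, as Yin and co propose, has to bend this exponential tail toward the fatter one actually observed.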



Quote for the day:


"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne


Daily Tech Digest - April 02, 2019

Adopting cloud is not simply a case of lifting and shifting workloads to a designated cloud provider; it also encompasses working out the migration costs of moving infrastructure to the cloud. The applications earmarked for migration also need to be developed for use in the cloud, and companies trying to retrofit their existing ones to fit such an environment will have a huge uphill battle. For that reason, administrators working in greenfield sites have a major advantage over those dealing with brownfield infrastructure. And planning is the make-or-break requirement for a successful cloud deployment. It is important to be realistic about application requirements. It may be simple to say “scale as required”, but that usually comes with a cost that needs to be worked out ahead of time – not just the actual instance cost, but also the technological development and technical debt it will incur. Scaling cannot just be thrown in ad hoc – testing, testing and more testing is key. Also, not everyone needs auto-scaling, so be honest about the organisation’s requirements. Features cost money, and waste money when they are not used.


The Impact and Ethics of Conversational Artificial Intelligence

Both a recent study from Carnegie Mellon University and a recent Amazon patent for "Voice-based determination of physical and emotional characteristics of users" indicate that far more information can be gleaned from your voice than you thought possible. Perhaps you could already guess that voice analysis can reveal things like your gender or emotions. Do you realize that your height, weight, physical health, mental state, and physical location could also be confidently determined? The Carnegie Mellon study suggested they could even make a fairly accurate 3-D representation of your face, just from your voice. However, while Carnegie Mellon suggests that this could be used for law enforcement, such as identifying hoax callers, Amazon is planning to use it to tailor purchase suggestions — for instance, offering to sell you cough drops if it recognizes that you have a cold. Using this type of analysis would allow our digital assistants to be much more in tune with us. Amazon announced in 2018 that Alexa was going to start acting on “hunches” so that it would every so often make an unprompted suggestion.


Hackers reveal how to trick a Tesla into steering towards oncoming traffic


The problem lay within the single neural network which Tesla uses to detect lanes, among other functions. Images from a camera are processed, input into the network, and output is then saved and added to a virtual map of the vehicle's surroundings. While a controller manages the car's auto-steering decisions, the researchers created an attack scenario in which the feed images were compromised by way of three stickers on the road, which led to the car's trajectory changing. By applying small, inconspicuous stickers to the road, the system failed to notice that the fake lane was directed towards another lane -- a scenario the team says could have serious real-world consequences. The vulnerability and security weaknesses found by Tencent were reported to Tesla and have now been resolved. The findings were shared with attendees of Black Hat USA 2018. "With some physical environment decorations, we can interfere or to some extent control the vehicle without connecting to the vehicle physically or remotely," the team says.


Meta Networks builds user security into its Network-as-a-Service


Ever since its launch about a year ago, Meta Networks has staked security as its primary value-add. What’s different about the Meta NaaS is the philosophy that the network is built around users, not around specific sites or offices. Meta Networks does this by building a software-defined perimeter (SDP) for each user, giving workers micro-segmented access to only the applications and network resources they need. The vendor was a little ahead of its time with SDP, but the market is starting to catch up. Companies are beginning to show interest in SDP as a VPN replacement or VPN alternative. Meta NaaS has a zero-trust architecture where each user is bound by an SDP. Each user has a unique, fixed identity no matter from where they connect to this network. The SDP security framework allows one-to-one network connections that are dynamically created on demand between the user and the specific resources they need to access. Everything else on the NaaS is invisible to the user.


Why so many organizations sideline Internet of Things strategies

Any discussion about the IoT starts with a simple but often overlooked fact: Objects and assets possess no inherent intelligence. It’s all about the “smarts” humans build into them. Consequently, a dozen — or a million — smart devices operating within separate but disconnected systems won’t have the same impact and value as a collection of devices and systems that work together synergistically. In order to slide the dial from tactical to strategic, an enterprise must focus on identifying value points, determining how data can help unlock that value, and connecting the right devices and systems in the right way. When an enterprise pinpoints value — for customers, employees, partners and others — it suddenly holds a map and a compass that points to specific devices, tools, technologies and solutions. However, an IoT platform must also be flexible and agile enough to support changes in devices, software and the overall business environment. Fast pivots and modular deployments — what many describe as agile environments — are now paramount.


Why women still make up only 24% of cybersecurity pros

Despite more women entering and succeeding in the cybersecurity field, pay inequalities persist, the report found. While 29% of men in the field report annual salaries between $50,000-$90,000, only 17% of women do the same. Some 20% of men in cyber earn between $100,000-$499,999, compared to 15% of women. Both male and female cybersecurity professionals share many of the same concerns about their roles, including lack of commitment from upper management, the reputation of their organization, the risk of seeing their job outsourced, a lack of work-life balance, the threat of artificial intelligence (AI) reducing the need for their role, and a lack of standardized cybersecurity terminology to effectively communicate within their organization. "It's an encouraging sign that more women are succeeding in cybersecurity and moving up through the ranks," Jennifer Minella, vice president of engineering and security at Carolina Advanced Digital, Inc. and chairperson of the (ISC)² board of directors, said in the release.


Zuckerberg calls for new internet regulation


Zuckerberg said effective privacy and data protection required a globally harmonised framework. “People around the world have called for comprehensive privacy regulation in line with the European Union’s General Data Protection Regulation (GDPR), and I agree. I believe it would be good for the internet if more countries adopted regulation such as GDPR as a common framework.” New privacy regulation around the world, he said, should build on the protections GDPR provides, it should protect individuals’ rights to choose how their information is used – while enabling companies to use information for safety purposes and to provide services – it should not require data to be stored locally, and it should establish a way to hold companies such as Facebook accountable by imposing sanctions when they make mistakes. “I also believe a common global framework – rather than regulation that varies significantly by country and state – will ensure that the internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protections,” Zuckerberg wrote.


Kubernetes Secrets Management

A Kubernetes Secret is mainly designed to carry sensitive information that the web service needs to run. This includes information such as username and password, tokens for connecting with other pods, and certificate keys. Putting sensitive information in a Secret object allows for better security and tighter control over those details. Secrets are also easy to integrate with existing services. You just have to tell the pods to use the custom Secrets you have created alongside the native Secrets created by Kubernetes. This means you can use Secrets to make deploying a web service across multiple clusters easier. It is also worth noting that Secrets are base64-encoded for ‘encryption’ purposes. You can convert strings or values into base64 and revert them back before use. The encoding/decoding process is already built into Kubernetes, eliminating the need for third-party tools when adding this extra layer of security. Storing sensitive environment variables becomes more seamless. It’s important not to commit base64-encoded Secrets, as they can be easily decoded by anyone.
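To see why base64 offers no real confidentiality, here is a small sketch (the secret value is made up): anything committed in base64 form can be read back with a single standard-library call.

```python
import base64

# A Secret manifest stores values base64-encoded, e.g.:
#   data:
#     password: cyRjcmV0IQ==
# base64 is an encoding, not encryption -- it is trivially reversible.
plaintext = "s$cret!"
encoded = base64.b64encode(plaintext.encode()).decode()
decoded = base64.b64decode(encoded).decode()

print(encoded)   # cyRjcmV0IQ==  (the value as it appears in the manifest)
print(decoded)   # s$cret!       (recovered with no key or secret knowledge)
```

This is exactly why base64-encoded Secrets should never be committed to a repository: decoding them requires no key, only the one call above.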


When Wi-Fi is mission-critical, a mixed-channel architecture is the best option

For many carpeted offices, multi-channel Wi-Fi is likely to be solid, but there are some environments where external circumstances will impact performance. A good example of this is a multi-tenant building in which there are multiple Wi-Fi networks transmitting on the same channel and interfering with one another. Another example is a hospital, where many campus workers move between APs; each client keeps trying to connect to the best AP, continually disconnecting and reconnecting, resulting in dropped sessions. Then there are environments such as schools, airports, and conference facilities where there is a high number of transient devices and multi-channel can struggle to keep up. ... There has been recent innovation from the manufacturers of single-channel systems that mix channel architectures, creating a “best of both worlds” deployment that offers the throughput of multi-channel with the reliability of single-channel. For example, Allied Telesis offers Hybrid APs that can operate in multi-channel and single-channel mode simultaneously.


Building High-Quality Products With Distributed Teams

Another aspect of making a high-quality product is the testing process. She mentioned having a mature testing process, with a test plan and automated, integration, load, and stress testing, which allows issues to be identified as soon as possible, not at the very last moment. Her advice on developing high-quality products is to make quality your priority and make decisions based on this priority. It means having a mature quality process and having the best software testing engineers in your team, she argued, and working with risks; not ignoring them, but mitigating them. Gorbachik suggested making daily decisions from your high-quality product perspective. For example, you have a choice: deliver the product earlier without automated test coverage, or deliver the product later, but cover it with automated tests. If your main target is a high-quality product, then option 2 (deliver the product later, but cover it with automated tests) is your choice, she argued.



Quote for the day:


"Management is efficiency in climbing the ladder of success; leadership determines whether the ladder is leaning against the right wall." -- Stephen Covey