Don’t allow chaos to take over. Ever heard the term “organized chaos”? Transforming your team will likely mean changing every person’s role in your organization. It’s challenging. There will be a point when no one quite knows what they should be doing. And that’s OK. But that’s also where planning comes in. The planning process helps you think through potential drawbacks and anticipate where there may be friction. This means continually looking for opportunities to evolve your processes or replace them with new ones. Failure to spend enough time planning can lead to breakdowns, which can affect your systems’ availability or important programs. This could spell disaster, so be sure to spend enough time in the planning stage and organize the transformation as much as possible. There may be those in your organization who feel that jumping in feet first and making changes very quickly is the best way to overcome the naysayers and show progress. Many times, the senior management team or your board may support this path. However, it can be the quickest way to fail.
Researchers at cyber security firm Check Point have discovered a vulnerability in Qualcomm chipsets that could allow attackers to gain unauthorised access to sensitive data. The vulnerability (CVE-2019-10574) exists in Qualcomm's Secure Execution Environment (QSEE), an implementation of a Trusted Execution Environment (TEE) based on ARM TrustZone technology. QSEE, more commonly known as Qualcomm Secure World, is a secure area on the main processor. The purpose of this hardware-protected space is to shield sensitive information, such as passwords, payment card credentials and encryption keys, from unauthorised access. ARM TrustZone has become an integral part of all modern mobile devices. These devices come with specialised, trusted components that handle transitions from the device's Rich Execution Environment (REE) to the TEE. This prevents software or apps outside the trusted zone from compromising the hardware-based security capabilities of the TEE.
Shifting to microservices can be done in one of two ways. The first option is to keep a solid monolithic base and start building microservices around it. The second is to iteratively transform whole applications into microservices. In either case, teams need to identify the boundaries of each microservice: they must encapsulate each business function as a ‘bounded context.’ To do so, teams must minimise the dependencies of newly formed microservices on the monolith. They must establish service-to-service communication outside the monolith and begin fostering trust in the new, decomposed application environment. In this setting, they can extract each bounded context into a single microservice with its own database. ... Deploying microservices in this way increases the organisation’s ability to provide cross-unit and cross-application functions. By enforcing the established boundaries between new and existing modules, companies can also evolve their architecture continually and support new business processes.
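The extraction process above is often fronted by a routing layer that sends traffic for an already-extracted bounded context to its new microservice while everything else still reaches the monolith. A minimal sketch in Python (the service names and endpoints here are hypothetical, purely for illustration):

```python
# Hypothetical routing table for incremental microservice extraction:
# paths belonging to an extracted bounded context go to the new service;
# all other paths continue to hit the monolith.
EXTRACTED_CONTEXTS = {
    "billing": "http://billing-svc.internal",   # hypothetical endpoint
    "catalog": "http://catalog-svc.internal",   # hypothetical endpoint
}
MONOLITH = "http://monolith.internal"           # hypothetical endpoint

def route(path: str) -> str:
    """Return the upstream base URL for a request path like '/billing/invoices'."""
    context = path.strip("/").split("/", 1)[0]
    return EXTRACTED_CONTEXTS.get(context, MONOLITH)
```

As more bounded contexts are carved out, entries are simply added to the table; retiring the monolith then amounts to the table covering every context.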
While functional programming falls outside mainstream programming languages, developers and architects interested in it should consider three ways to implement it: as part of a functional architecture, as part of an isolated or independent architecture, or as part of a hybrid programming model. Fundamentally, a pure functional programming language does not retain state and is more like a mathematical expression than a procedural program. This architecture works for compiler construction or, perhaps, for APIs. A program to shorten and forward a URL, for example, might better fit a pure functional language than other, more common approaches. LISP (from “list processing”) is an impure functional language in that it can mix traditional procedural programming with the functional approach, using state and control flow. Unfortunately, few applications fit a pure functional approach, and few programmers want to program in a mixed language like LISP. Software architectures, however, allow for two other functional programming approaches that fit some projects.
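The URL-shortener case mentioned above can illustrate the pure functional style even in a multi-paradigm language. A hedged sketch in Python (the function names and code length are assumptions for illustration): there is no mutable state, so the same URL always yields the same short code, like evaluating a mathematical expression.

```python
import hashlib
from typing import Optional

def shorten(url: str, length: int = 8) -> str:
    """Deterministically map a URL to a short code; pure and stateless."""
    return hashlib.sha256(url.encode("utf-8")).hexdigest()[:length]

def expand(code: str, known_urls: tuple) -> Optional[str]:
    """Resolve a code against an immutable tuple of known URLs, without side effects."""
    matches = tuple(u for u in known_urls if shorten(u) == code)
    return matches[0] if matches else None
```

Because both functions depend only on their arguments, they are trivially testable and safe to call concurrently, which is the practical appeal of the pure functional approach.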
Retailers are on the hunt for data scientists, now more than ever. Given the rise of online shopping and cut-throat competition from e-commerce giant Amazon, smaller retailers have begun closing their physical locations around the world. In what has been dubbed the “retail apocalypse,” an estimated 8,600 stores will close in 2019 alone. Studies show that retailers are also being forced to shift their sales strategies, offering customers more personalized online experiences. Given this shift, retailers are actively seeking data candidates who can help capture customer loyalty and keep sales high. The shift to a more data-centric approach in retail is not necessarily new, though there has been a big push in recent years. Retail giant Target Corporation arguably led the charge when, in 2013, it hired Paritosh Desai as vice president of business intelligence, analytics and testing. Not only did Desai hire a robust data team, but he also created a data-driven culture company-wide. He established fluidity between the data team and managers by building an analytics system managers could use themselves, promoting data-driven decision-making across the board.
As security moves into the cloud, that team will be responsible for rebuilding the infrastructure there, and if security isn't part of the conversations around that infrastructure, organizations are missing a huge opportunity. When organizations decide they want to do DevSecOps, they turn to a team, be it development, operations, or security, and tell it to get on board with transforming, often without the proper skills, resources, or guidelines. You need to know your DevOps teams' comfort level with security and with digital transformation. For example, if they don't know about serverless infrastructure beyond the obvious, you're in for trouble. Expecting a team to learn exclusively on the fly is basing a strategy on hope, which is always doomed to fail. Instead, take your spare moments to offer your DevSecOps team opportunities to address their blind spots, whether through additional certifications or shadowing. It doesn't have to be perfect, but every bit helps.
"As organizations continue to grapple with complex digital transformation initiatives, flexibility and security are critical components to enable seamless and reliable cloud adoption," said Wendy Pfeiffer, CIO of Nutanix, in a statement. "The enterprise has progressed in its understanding and adoption of hybrid cloud, but there is still work to do when it comes to reaping all of its benefits. In the next few years, we'll see businesses rethinking how to best utilize hybrid cloud, including hiring for hybrid computing skills and reskilling IT teams to keep up with emerging technologies," she added. More than 80% of survey respondents said hybrid cloud environments were the ideal model for IT operations, especially in the Americas. Three out of every five IT managers surveyed said flexibility and mobility are among the main features they look for in a cloud system, and the report noted that "cherry-picking infrastructure in this way to match the right resources to each workload as needs change results in a growing mixture of on- and off-prem cloud resources, like the hybrid cloud."
For some years, embedded processors have been able to vary their operating frequency and supply voltage based on workload. Essentially, a processor’s core can run slower when it isn’t busy; scaling back the main clock frequency directly translates to fewer transistors switching per second, which saves power. When the core really needs to get busy, the clock frequency is scaled up, increasing throughput. Supply voltage and clock frequency are related; by reducing both, the power saved is amplified, because dynamic power scales roughly with the square of the supply voltage times the clock frequency. This kind of scaling isn’t going to be enough to deliver the power and performance needed in the embedded devices now being developed to run ML models. That’s because the way we measure performance is going to change. Right now, processors are typically measured in operations per second; we now measure that in teraops, or trillions of operations per second (TOPS). Using TOPS to measure a processor executing inferences won’t make as much sense as it does for sequential code, because the way a model runs isn’t directly comparable to regular embedded software. ML processors will instead be measured on the accuracy they achieve when delivering a given number of inferences per second for a given amount of power.
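The amplification described above follows from the standard first-order model of dynamic CMOS power, P ≈ C·V²·f. A back-of-the-envelope sketch (the capacitance, voltage, and frequency values are hypothetical, chosen only to show the cubic effect of scaling voltage and frequency together):

```python
def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    """First-order dynamic CMOS power model: P = C * V^2 * f (watts)."""
    return capacitance * voltage ** 2 * frequency

# Hypothetical operating points: effective switched capacitance 1 nF,
# full speed at 1.0 V / 2 GHz, scaled down by 20% on both axes.
full = dynamic_power(1e-9, 1.0, 2.0e9)
scaled = dynamic_power(1e-9, 0.8, 1.6e9)

# A 20% cut to both V and f yields 0.8**2 * 0.8 = 0.512, i.e. roughly
# half the dynamic power, not merely a 20% saving.
ratio = scaled / full
```

This is why dynamic voltage and frequency scaling (DVFS) reduces both together rather than frequency alone; frequency-only scaling would save only the linear factor.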
There's little doubt that connected workers are the future, but one thing employees and unions should be mindful of is the possibility of mission creep. Sure, IoT wearables now help workers stay safe and work more efficiently, but there's a risk that this seemingly innocent beginning will lay the groundwork for the gradual yet inevitable encroachment of smart technology into most or all aspects of an employee's day. In the future, wearables and smart tech may be used to push employer control over employees to excessive, even counterproductive, levels. Does this sound like an exaggerated prediction? Maybe, but there are signs that at least some companies may be moving in a recognisably dystopian direction. Most notably, Amazon patented a wristband in 2018 that tracks employee movements within warehouses and even uses ultrasonic detectors and vibrations to guide workers' hands toward ordered items. Coupled with reports that Amazon summarily and routinely fires employees who don't work quickly enough, this kind of development evokes a future in which employers exploit IoT to tighten the yoke around their employees' necks.
Is your organization still in a state of flux over how to leverage this trend? Or are you among the innovators inclined to adopt cloud-first strategies and cash in on the cloud opportunity? For most SMBs experiencing the high operational cost of IT infrastructure and compromised app performance, migration to the cloud seems like an attractive option. With benefits such as the pay-as-you-go purchase model, enhanced collaboration with globally distributed teams, robust database backup, seamless implementation of disaster recovery, and faster application delivery, cloud migration is the right mainstream strategy for any evolving business. But for a cloud newbie considering migrating a first workload to the cloud, a little attention to prerequisites and caution in implementation will ensure they can maximize their cloud investments. A stalled cloud implementation will increase cost and can lead to loss of sensitive information and operational disruption. While the implementation of any new technology is bound to encounter minor glitches, following the recommendations listed below can help minimize errors.
Quote for the day:
"If a leader loves you, he makes sure you build your house on rock." -- Ugandan Proverb