Daily Tech Digest - February 21, 2020

Cloud-enabled threats are on the rise, sensitive data is moving between cloud apps

“We are seeing increasingly complex threat techniques being used across cloud applications, spanning from cloud phishing and malware delivery, to cloud command and control and ultimately cloud data exfiltration,” said Ray Canzanese, Threat Research Director at Netskope. “Our research shows the sophistication and scale of the cloud enabled kill chain increasing, requiring security defenses that understand thousands of cloud apps to keep pace with attackers and block cloud threats. For these reasons, any enterprise using the cloud needs to modernize and extend their security architectures.” 89% of enterprise users are in the cloud, actively using at least one cloud app every day. Cloud storage, collaboration, and webmail apps are among the most popular in use. Enterprises also use a variety of apps in those categories – 142 on average – indicating that while enterprises may officially sanction a handful of apps, users tend to gravitate toward a much wider set in their day-to-day activities. Overall, the average enterprise uses over 2,400 distinct cloud services and apps.

Move beyond digital transformation — and improve your ROI

How do you achieve value across an entire digital enterprise and make sure all investments give you that coveted, but sometimes elusive, ROI? You need to do more than transform. You need to transcend traditional approaches to growth and change. As part of PwC’s 2020 Global Digital IQ research, we studied thousands of companies and their digital behaviors. We found that just 5 percent are getting moderate or significant payback from their digital efforts in all areas measured: growth, profits, innovation, customer experience, brand lift, attracting and retaining talent, disrupting their own industry, using data to improve decisions, cutting costs, and combating new industry entrants. This elite group of companies — what we call Transcenders — achieves real payback across their enterprises. They embrace innovation, and they don’t fear change. If this were high school, they’d reign as prom queen, star quarterback, and valedictorian all rolled into one. What does it take to transcend? Four core differentiators deliver consistent, standout performance. And they’re elements many leaders talk about but don’t always act on or get full value from.

It's easy to see a digital transformation business strategy as a fun-filled ride into the future and envision the onslaught of high-fives when new technology (and the associated technology leader) has repositioned the company for change, growth and becoming a digital business. Reading the marketing pitch on digital transformation, it's easy to assume that you buy the right technology and perhaps some services, and after a few months, you arrive in the land of rainbows and unicorns. What's often left out of these stories are two salient facts. First, technology by itself has rarely transformed a business. Kodak invented many of the core technologies for digital photography, but chose to shelve them for a variety of reasons, not the least of which was a concern about cannibalizing its core business. The DVD was a widely available technology, but using it to create a novel business model of sending movies by mail helped Netflix -- with its super-easy customer experience -- overtake video giant Blockbuster, which clung to its store-based ways.

Home Affairs pushes back against encryption law proposals

The Independent National Security Legislation Monitor (INSLM), Dr James Renwick, went further during public hearings in Canberra this week. Not only did he propose tougher independent oversight of TOLA actions, he repeatedly expressed his concern that the Attorney and the Minister didn't constitute an independent "double lock" for authorising TCNs. Such a double lock is required in the UK, where the equivalent to a TCN must be approved by both the Secretary of State for Home Affairs and the independent Investigatory Powers Commissioner's Office (IPCO). "Leaving aside the personalities and the people who might fill those offices from time to time, nevertheless the Attorney and the Minister for Communications are both members of the same government and the same cabinet," Renwick said on Friday. "There's at least some administrative law which suggests that in those circumstances, they might both be bound by a cabinet decision." Hamish Hansford, DHA's Acting Deputy Secretary for Policy, rejected that view. "Notwithstanding both an Attorney and Minister for Communications are members of a cabinet, they are also independent decision-makers under statute, and they need to exercise those responsibilities independently, if you like," he said.

Looking at the future of identity access management (IAM)

MFA is already popular among some enterprise technologies and consumer applications handling sensitive, personal data (e.g., financial, healthcare), and will continue to transform authentication attempts. A lot has been said about increased password complexity, but human error is still persistent. The addition of MFA immediately adds further security to authentication attempts by having the user enter a temporarily valid PIN code or verify their identity by other methods. An area to watch within MFA is the delivery method. For example, SMS notifications were the first stand-out but forced some organizations to weigh the added costs that messaging might bring on their mobile phone plans. SMS remains prevalent, but all things adapt, and hackers’ increased ability to hijack these messages has made their delivery less secure. Universal one-time password (OTP) clients, such as Google Authenticator, have both increased security and made the adoption of MFA policies much easier through time-sensitive PIN codes. Universal OTPs also do away with the requirement for every unique resource to support its own MFA method.
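Time-based OTP clients of the kind described above derive codes from a shared secret and the current 30-second window, so no per-message SMS delivery is needed. A minimal sketch of the standard TOTP construction (RFC 6238); only Python's standard library is used:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, at: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at // step)                  # which 30-second window we are in
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits
rfc_secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(rfc_secret, at=59, digits=8))  # "94287082"
```

Because the client and the server compute the same code from the shared secret and the clock, a single authenticator app can cover many services without each one supporting its own delivery channel.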

Forget the Internet of Things. Here’s what IoT really stands for

Intelligence of things looks less like the restroom in Ethiopia, and more like Hartsfield-Jackson Atlanta International Airport, where the world’s largest toilet maker, Toto, has taken things a step further. There, too, the bathrooms are studded with sensors, from the urinals to the faucets. But they don’t just flush automatically; they all report back to a central cloud database. The volume of data is astounding – a single toilet may flush 5,000 times per day. In aggregate, the airport can use this data to predict “rush hour” for the airport bathrooms, and deploy custodians before and after to make sure the toilets are clean, the paper towels are stocked, and everything’s running smoothly. “The last decade was about connectivity, and we describe that dynamic with the Internet of Things,” Steve Koenig, vice president of research at the Consumer Technology Association, told Digital Trends. “This decade is really about adding intelligence to different devices, services, etc. We’re confronted with a new IoT: The intelligence of things.” ... “Without intelligence, there is no value,” Kiva Allgood, head of IoT and automotive at Ericsson, told Digital Trends.
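The airport's "rush hour" prediction reduces to bucketing timestamped sensor events by hour of day. A toy sketch of that aggregation; the event times and function name are hypothetical:

```python
from collections import Counter
from datetime import datetime

def busiest_hours(event_times, top_n=2):
    """Bucket flush events by hour of day; return the top_n busiest hours."""
    by_hour = Counter(ts.hour for ts in event_times)
    return [hour for hour, _ in by_hour.most_common(top_n)]

# Hypothetical flush timestamps for one restroom
events = [datetime(2020, 2, 21, h, m) for h, m in
          [(7, 5), (7, 20), (7, 45), (12, 10), (12, 30), (18, 0)]]
print(busiest_hours(events))  # [7, 12] -> schedule custodians around 7am and noon
```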

How healthcare CIOs can keep their organisations secure

For healthcare environments, ransomware poses one of the scariest types of threats in the entire cyber security arena. Physicians-in-training get a taste of the potential reality during routine training exercises at Maricopa Medical Center. As trainees attempt to use diagnostic equipment, like CT scanners, in resuscitating “patient” dummies, they’re greeted with ransomware lockout messages onscreen demanding Bitcoin payments before the equipment can be used again. They must use their intuition to treat the patient instead of the correct equipment. The price for this can be (again, this is a dummy patient) serious brain damage. The Internet of Things (IoT) unlocks huge potential for organisations, including healthcare entities. But this dependence on internet-connected infrastructure also poses a risk. Avoiding ransomware attacks in healthcare requires a multifaceted approach ... The Health Insurance Portability and Accountability Act (HIPAA) was an important step forward for healthcare security and organisations as well as patients.

Cybersecurity: Hacking victims are uncovering cyberattacks faster

"The buzz around the topic leading up to the GDPR deadline helped to get it in front of senior execs outside of the IT team. Many of them saw the importance of GDPR compliance and they supported measures to improve defences and breach identification," Grout said. While the legislation only applies to the European Union, the impact is also felt by global organisations that do business or transfer data in Europe. That appears to have had an impact on the median dwell time across the globe, which is down from 78 days to 56 days. However, one in ten FireEye investigations still involve organisations that had cyber attackers intruding on the network for over two years, indicating that cyber criminals -- and in some cases, nation-state backed hacking operations -- can still remain very stealthy when compromising networks. "Some of them are being targeted by highly skilled APT [Advanced Persistent Threat] groups that are able to hide themselves for a long time after the initial breach," said Grout. One of the most common weaknesses exploited by attackers -- as identified in the report -- is the failure to enforce multi-factor authentication (MFA) on the enterprise network. A lack of MFA means that cyber criminals who successfully breach or steal passwords can easily gain access to networks.

AI’s bias problem: Why Humanity Must be Returned to AI

If an AI system is built in a contrived laboratory environment with data that isn’t representative of the target audience, or worse, patterns in the data reflect prejudice, the AI’s decisions will also be prejudiced. It is incredibly difficult for algorithms to ‘unlearn’ these patterns, so it is important that biases are not built into the algorithm from the first phases of implementation. Origins of bias can be nuanced and hard to spot, ranging from historic prejudices based on race and gender, to a lack of diversity within training sets. As a consequence, certain groups are under-represented in the data. A study by the National Institute of Standards and Technology (NIST) found that facial recognition misidentified African-American and Asian faces ten to 100 times more often than Caucasian faces, while Native Americans were misidentified more than any other group. The study also revealed that women were falsely identified more often than men, and senior citizens had more than 10 times the issues faced by middle-aged adults. According to a report by AI Now Institute at New York University, the lack of diverse training data also threatens to worsen the historic underemployment of disabled people.
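Disparities like those NIST reported are usually quantified as per-group false match rates: the fraction of non-matching face pairs the system wrongly accepts. A sketch of that measurement; the group names and numbers below are invented purely for illustration:

```python
def false_match_rate(results):
    """results: (predicted_match, true_match) pairs for one demographic group."""
    non_matches = [predicted for predicted, actual in results if not actual]
    return sum(non_matches) / len(non_matches) if non_matches else 0.0

# Hypothetical per-group outcomes: 1,000 non-matching pairs evaluated per group
groups = {
    "group_a": [(True, False)] * 1 + [(False, False)] * 999,
    "group_b": [(True, False)] * 20 + [(False, False)] * 980,
}
rates = {name: false_match_rate(r) for name, r in groups.items()}
ratio = rates["group_b"] / rates["group_a"]
print(rates, ratio)  # group_b misidentified 20x as often -- a NIST-style gap
```

Comparing these rates across groups, rather than reporting one aggregate accuracy number, is what surfaces the kind of disparity the study describes.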

Achieving SOC 2 Compliance in DevOps

If you are wondering whether AWS complies with SOC 2 at this point, you are not alone. AWS as a cloud environment is designed to comply with SOC 2 requirements; at the very least, the ecosystem offers tools that make compliance easy. SOC 2 compliance is something that AWS takes seriously. In fact, AWS keeps the location of its data centers confidential to ensure maximum security. It also offers high resilience with multiple redundancies and automated disaster recovery measures. Through AWS Artifact, you can gain access to all SOC reports, including SOC 2 Security, Availability, and Confidentiality Reports generated by AWS. All controls are provided, and you have a complete list of the services in scope for compliance purposes. AWS has an extensive set of tools for maintaining controls and ensuring compliance. Amazon CloudWatch is a good example of a comprehensive monitoring tool that you can use across the AWS ecosystem. The same is true for AWS CloudTrail and Amazon GuardDuty. You also have AWS Shield, offering security measures that are ready to deploy.

Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - February 18, 2020

Artificial Human Beings: The Amazing Examples Of Robotic Humanoids And Digital Humans

Digital human beings are photorealistic digitized virtual versions of humans. Consider them avatars. While they don't necessarily have to be created in the likeness of a specific individual (they can be entirely unique), they do look and act like humans. Unlike digital assistants such as Alexa or Siri, these AI-powered virtual beings are designed to interact, sympathize, and have conversations just like a fellow human would. Here are a few digital human beings in development or at work today: Neons: AI-powered lifeforms created by Samsung’s STAR Labs and called Neons include unique personalities such as a banker, K-pop star, and yoga instructor. While the technology is still young, the company expects that, ultimately, Neons will be available on a subscription basis to provide services such as a customer service or concierge. Digital Pop Stars: In Japan, new pop stars are getting attention—and these pop stars are made of pixels. One of the band members of AKB48, Amy, is entirely digital and was made from borrowing features from the human artists in the group. Another Japanese artist, Hatsune Miku, is a virtual character from Crypton Future Media.

Edge computing enables near-real-time application engagements. While local computing is not new, edge computing has emerged because technologies, such as content delivery networks and local edge devices and gateways, can now aggregate IoT sensor and mobile device insights to enable on-demand actions where people and physical processes exist, need them, and benefit from them. Want to dramatically improve customer experience, employee experience, and business achievements? This is powerful empowerment. Edge computing architectures have three major building blocks. Edge computing varies across different solution use cases and value scenarios, so it's difficult to define just a single pattern for everyone. Forrester's research does find three general building blocks core to most scenarios: edge management layers, edge networks, and edge intelligence fabric software. The research also covers enterprise and government use cases and case studies of how your peers are empowering their customers and advancing their market value with these technologies, along with the functions and components of edge computing, the vendor landscape across all industries, and the services already offered.

It isn’t just the engineering team that should focus on developing the product offering or key consumer touchpoints. Employees across the organisation are valuable as they all interact with different stages of the customer journey, and can provide valuable insights into pain points. They are capable of delivering a constant flow of new ideas to improve the digital customer experience, asking what will help to add value for your customers while engineering teams actually integrate a process to make it a reality. It’s no longer about the waterfall approach of working in segments, but rather coming together as a collaborative business and empowering the devops team to make the technical decisions needed to make the ideas a reality.  Never underestimate the importance of collaboration in innovation. Giving employees at all levels the opportunity to get involved with their own ideas, perhaps via collaborative brainstorming sessions with the engineering team, can mean the risk of analysis paralysis will be averted, as everyone is involved from the beginning. It is essential for the management team to provide employees with not only the opportunity to share their thoughts about ways to develop the business, but the training to help them use their data and technology to bring these ideas to life.

Bala is right to call out that one of the primary benefits of serverless and "single-purpose microservices" architectures is that "You can use the right tool for the right job rather than being constrained to one language, one framework or even one database." This is immensely freeing for developers, because now instead of writing monolithic applications that likely have very low utilization with spiky workloads, they can build microservices tied to ephemeral serverless functions. When the system is idle, it shuts down and costs nothing to run. Everyone wins. This also can make maintaining code more straightforward. For monolithic applications, updating code can present a major burden because of the difficulty inherent in covering all dependencies. As Ophir Gross has noted, "Spaghetti code is full of checks to see what interface version is being used and to make sure that the right code is executed. It's often disorganized and usually results in higher maintenance efforts as changes in code affect functionality in areas that are challenging to predict during development stages."

DDoS Attacks Nearly Double Between Q4 2018 and Q4 2019

DDoS attackers continued to leverage non-standard protocols for amplification attacks in the last quarter of 2019, researchers found. Adversaries have also adopted Apple Remote Management Service (ARMS), part of the Apple Remote Desktop (ARD) application for remote administration. This tactic was first spotted in June 2019; by October, attacks were widespread. The fourth quarter of 2019 brought multiple high-profile DDoS attacks, including threats against financial organizations in South Africa, Singapore, and nations across Scandinavia. DDoS attacks aimed to cause disruption for the United Kingdom's Labour party and also targeted Minecraft servers at the Vatican. In a more recent case, just last week the FBI warned of a potential DDoS attack targeting a state-level voter registration and information site. "This demonstrates that DDoS is still a common attack method among cybercriminals driven by ideological motives or seeking financial gain, and organizations should be prepared for such attacks and have a deep understanding of how they evolve," researchers said in a statement.

Keeping up with disruptors through hybrid integration

For consumers of the digital era, experience is everything. They expect newfound convenience and flexibility and will have no problem looking elsewhere if this cannot be provided. This raises the question: how can the traditional players hope to keep up? However, things aren’t as complex as they seem. One reason these new companies can drive such positive results comes down to the fact there is no reliance on legacy databases, and they can take advantage of existing third-party systems. For example, Citymapper leverages open data from Transport for London to retrieve journey information and provide real-time visibility over transport schedules, allowing customers to make the best choice of journey based on timings. Meanwhile, Uber uses Google’s APIs to run its mapping software and match customers with the drivers closest to them. From there, the data is stored and used to predict supply and demand, as well as set fares. In both cases, these services have been built on existing integrations, meaning they don’t run into the same problems as many of the established players.

What Do Facebook's Quiet AI Acquisitions Across the UK Signify?

Amid all the controversies and roadblocks in its drive to attain AI leadership, the company is moving forward with innovation and tech developments. These developments are a major result of its acquisitions; small but significant. Facebook’s M&A activities are proving to be quite beneficial in its AI journey. Recently, the company acquired Scape Technologies, a London-based computer vision startup working on location accuracy beyond the capabilities of GPS. Full terms of the deal remain as yet unknown, although a Companies House update reveals that Facebook Inc. now has majority control of the company (more than 75%). Further, regulatory filings show that Scape’s previous venture capital representatives have resigned from the Scape board and have been replaced by two Facebook executives. ... Meanwhile, the acquisition by Facebook, no matter what form it takes, looks like a good fit given the US company’s investment in next-generation platforms, including VR and AR. It is also another — perhaps worrying — example of US tech companies hoovering up UK machine learning and AI talent early.

Why AI systems should be recognized as inventors

It’s important to note that the Artificial Inventor Project doesn’t want AI systems to own the patents for their creations. Such an interpretation of the case confuses ownership of patent rights with inventorship. Hence the DABUS applications list the AI as the inventor, with the AI’s owner listed as the patent applicant and prospective owner of any issued patents. It will be many years before the team learns the full outcome of its applications. The team is appealing the rulings of both the EPO and the UK IPO. Other decisions in jurisdictions including the US, Germany, Israel, Taiwan, China, and Korea are still pending, as well as one filed under the Patent Cooperation Treaty, which facilitates the patent application process in more than 150 states. The World Intellectual Property Organization and the United States Patent and Trademark Office, meanwhile, have both requested comments on how they could develop policies for such applications. They may need to address any ambiguity over who owns the patents for AI-generated inventions when both the creator of the system and an individual user have contributed to its output. But granting ownership to the person who made the AI operable may be the most straightforward solution.

Mac attacks on the rise

"We saw a significant rise in the overall prevalence of Mac threats in 2019, with an increase of over 400% from 2018,'' the report by Malwarebytes Labs stated. Part of that increase can be attributed to an increase in its Malwarebytes for Mac user base, the report noted. To see if that increase reflected what was actually happening in the Mac threat landscape, Malwarebytes said, it examined threats per endpoint on both Macs and Windows PCs. "In 2019, we detected an average of 11 threats per Mac endpoint--nearly double the average of 5.8 threats per endpoint on Windows,'' the report said. Another key finding was that overall, consumer threat detections were down by 2% from 2018, but business detections increased by 13% in 2019, the report said. This resulted in a mere 1% increase in threat volume year-over-year. The sophistication of threat capabilities in 2019 increased, with many using exploits, credential stealing tools, and multi-stage attacks involving mass infections of a target, the report said. While seven of 10 top consumer threat categories decreased in volume, HackTools--a threat category for tools used to hack into systems and computers--increased against consumers by 42% year-over-year, bolstered by families such as MimiKatz, which also targeted businesses, the report said.

4 principles of analytics you cannot ignore

Data is a resource. If you are not analyzing it, it is an unused resource. At SAS, we often say, “Data without analytics is value not yet realized.” Naturally, then, wherever there is data, there needs to be analytics. But what does that mean today, when we are generating more data, and more diverse data, than ever before? And all of that data streams or moves about many different networks. The first principle of analytics is about bringing the right analytics technology to the right place at the right time. Whether your data is on-premises, in a public or private cloud, or at the edges of the network – analytics needs to be there with it. ... You should pay great attention to the quality, robustness and performance of your algorithms. But the value of analytics is not in the features and functions of the algorithm – not anymore. The value is in solving data-driven business problems. The analytics platform is a commodity – everybody has algorithms. But operationalizing analytics is not a commodity. Everybody is challenged with bringing analytics to life. When you deploy analytics in production, it drives value and decisions.

Quote for the day:

"To be able to lead others, a man must be willing to go forward alone." -- Harry Truman

Daily Tech Digest - February 16, 2020

Is Your Cybersecurity Workforce Ready To Win Against Cybercriminals?

A trained staff is a critical business asset when it comes to handling information security projects. Whether your company is involved in a simple privileged access management (PAM) project or implementing a complex continuous adaptive risk and trust assessment (CARTA)-based strategy design, success depends on employee competency. Now that you have a training plan, implement it by assigning specific information security training certifications or training modules to each employee and measure the effectiveness and quality of execution against your business goals. ... The ultimate goal is to foster a cybersecurity culture across the organization. This is a tough task because it involves the human aspects of cybersecurity. Be prepared for resistance, and plan efforts to address employee concerns in an understanding and open manner. Empathy will get you to your goals faster than issuing strict directives and hoping employees will follow. Make cybersecurity practices a routine part of your business processes as well as strategic concerns. This 360-degree approach will become your best defense against information security risks.

For enterprise developers attempting to meet the highly specialized needs of a vertical and tech-savvy users' expectations, low-code platforms are a way to handle the scalability, data management, architecture and security concerns that hold back internal bespoke software projects. To be worth the money, a low-code platform must be flexible enough to build almost any app securely, even if it's only for internal users, said AbbVie's Cattapan. Low-code examples at the company range from a shipment management app to track chemicals around its labs and manufacturing campus, to a reporting app related to drug approval rules in more than 200 countries. To work for these purposes, a low-code platform has to scale in diverse situations: "We might have a really large dataset ... and we want the app server next to the data, but we also want the option to have it up in the cloud," Cattapan said. 

4 in Chinese Army Charged With Breaching Equifax
While many of the security issues at Equifax in 2017 have been discussed in lawsuits, investigations and news media reports, the new indictments offer some additional details of what happened starting in May of that year. After exploiting the vulnerability in Apache Struts, the hackers allegedly gained access to Equifax's online dispute portal in order to gain a foothold within the corporate network and steal more credentials, according to the indictment. After that, the four hackers spent several weeks mapping the network and running queries to understand what databases they could access and which ones held the personal data and intellectual property they were seeking, the indictment says. The hackers ran about 9,000 queries within the network over the course of several months, it adds. "Once they accessed files of interest, the conspirators then stored the stolen information in temporary output files, compressed and divided the files, and ultimately were able to download and exfiltrate the data from Equifax's network to computers outside the United States," prosecutors say.

How Edge Computing Is Supercharging the Internet of Things
Though many may imagine servers as rows of tall, boxy machines, in recent years servers have gone mobile, enabling edge computing on the road. Vehicle servers are a boon to law enforcement officers, who can avoid spending precious time on tasks such as manually keying in a license plate number to check suspicious vehicles. Police cruisers equipped with servers such as NEXCOM's MVS series of vehicle servers powered by Intel® Core and Intel Atom processors can quickly decode images of cars taken by a cruiser's rooftop camera, identify license plates, and determine whether they're listed in a database of vehicles of interest to law enforcement. ... Machines can see what humans miss. Imagine a failing motor on a factory floor begins to vibrate more quickly. That initial, negligible acceleration won't be noticeable to workers. But an electronic vibration sensor detects it, triggering analysis by predictive maintenance software. The software notifies personnel, who address the problem before it leads to a costly equipment breakdown. Edge computing helps manufacturers make the most efficient use of predictive maintenance technology.
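The failing-motor scenario above is essentially a rolling-average threshold check running at the edge. A simplified sketch; the baseline, tolerance and readings are made up for illustration:

```python
from collections import deque

class VibrationMonitor:
    """Flag a motor when its rolling-average vibration drifts above baseline."""
    def __init__(self, baseline: float, tolerance: float = 0.2, window: int = 5):
        self.baseline = baseline
        self.tolerance = tolerance
        self.readings = deque(maxlen=window)   # keep only the last `window` samples

    def ingest(self, reading: float) -> bool:
        """Return True when maintenance should be scheduled."""
        self.readings.append(reading)
        avg = sum(self.readings) / len(self.readings)
        return avg > self.baseline * (1 + self.tolerance)

monitor = VibrationMonitor(baseline=1.0)
healthy = [monitor.ingest(r) for r in [1.0, 1.02, 0.98, 1.01, 1.0]]
failing = [monitor.ingest(r) for r in [1.3, 1.4, 1.5, 1.6, 1.7]]
print(any(healthy), failing[-1])  # False True
```

Running the check on the device itself means an alert can be raised without shipping every raw sample to the cloud first, which is the efficiency argument made above.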

IBM highlights new approach to infuse knowledge into NLP models

There have been two schools of thought or "camps" since the beginning of AI: one has focused on the use of neural networks/deep learning, which have been very effective and successful in the past several years, said David Cox, director for the MIT-IBM Watson AI Lab. Neural networks and deep learning need data and additional compute power to thrive. The advent of the digitization of data has driven what Cox called "the neural networks/deep learning revolution." Symbolic AI is the other camp, and it takes the point of view that there are things you know about the world around you based on reason, he said. However, "all the excitement in the last six years about AI has been about deep learning and neural networks,'' Cox said. Now, "there's a growing idea that just as neural networks needed something like data and compute for a resurgence, symbolic AI needed something,'' and the researchers theorized that maybe what it needs is neural networks, he said. There was a sense among researchers that the two camps could complement each other, capitalizing on their respective strengths while offsetting their weaknesses in a productive way, Cox said.

The Kongo Problem: Building a Scalable IoT Application with Apache Kafka

Kafka is a distributed stream processing system which enables distributed producers to send messages to distributed consumers via a Kafka cluster. Simply put, it’s a way of delivering messages where you want them to go. Kafka is particularly advantageous because it offers high throughput and low latency, powerful horizontal scalability, and the high reliability necessary in production environments. It also enables zero data loss, and brings the advantages of being open source and a well-supported Apache project. At the same time, Kafka allows the use of heterogeneous data sources and sinks – a key feature for IoT applications that can leverage Kafka to combine heterogeneous sources into a single system. In order to achieve high throughput, low latency and horizontal scalability, Kafka was designed as a "dumb" broker and a "smart" consumer. This results in different trade-offs in functionality and performance compared to other messaging technologies such as RabbitMQ and Pulsar.
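The "dumb broker / smart consumer" split can be illustrated with a toy in-memory model: the broker only appends to and serves an ordered log, while each consumer tracks and advances its own read offset. This is a sketch of the design idea, not the real Kafka API:

```python
class DumbBroker:
    """Minimal model of Kafka's design: the broker just appends to a log."""
    def __init__(self):
        self.topics = {}                       # topic -> append-only message list

    def produce(self, topic: str, message: str) -> None:
        self.topics.setdefault(topic, []).append(message)

    def fetch(self, topic: str, offset: int, max_messages: int = 10):
        log = self.topics.get(topic, [])
        return log[offset:offset + max_messages]

class SmartConsumer:
    """The consumer, not the broker, remembers how far it has read."""
    def __init__(self, broker: DumbBroker, topic: str):
        self.broker, self.topic, self.offset = broker, topic, 0

    def poll(self):
        batch = self.broker.fetch(self.topic, self.offset)
        self.offset += len(batch)              # consumer advances its own offset
        return batch

broker = DumbBroker()
for reading in ["sensor-1:22.5", "sensor-2:19.8", "sensor-1:23.1"]:
    broker.produce("iot-readings", reading)

consumer = SmartConsumer(broker, "iot-readings")
print(consumer.poll())  # all three messages, in order
print(consumer.poll())  # [] -- this consumer's offset is already at the log's end
```

Keeping the broker this simple is what lets real Kafka brokers serve many independent consumers at high throughput: each consumer replays the same log at its own pace.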

Deep Instinct nabs $43M for a deep-learning cybersecurity solution that can suss an attack before it happens

“Deep Instinct is the first and currently the only company to apply end-to-end deep learning to cybersecurity,” he said in an interview. In his view, this provides a more advanced form of threat protection than the common traditional machine learning solutions available in the market, which rely on feature extraction determined by humans, meaning they are limited by the knowledge and experience of the security expert and can only analyze a very small part of the available data (less than 2%, he says). “Therefore, traditional machine learning-based solutions and other forms of AI have low detection rates of new, unseen malware and generate high false-positive rates.” There’s been a growing body of research that supports this idea, although we’ve not seen many deep learning cybersecurity solutions emerge as a result (not yet, anyway). He adds that deep learning is the only AI-based autonomous system that can “learn from any raw data, as it’s not limited by an expert’s technological knowledge.” In other words, it’s not based just on what a human inputs into the algorithm, but on huge swathes of big data, sourced from servers, mobile devices and other endpoints, that are fed in and automatically read by the system.

What Differentiates AI Leaders, According To A Founder Of Globant

Given that AI is so laden with ambiguity, companies often lack clarity in determining what AI can do for them and how to build roadmaps that will empower them to implement the technology most effectively. What’s more, half of organizations don’t have a clear definition of how employees and AI will most productively work together. In order to succeed, organizations must work to define the role of AI in their workplace and the ideal relationship between AI and employees. Armed with this knowledge, organizations will be primed to adopt the most appropriate AI solution for their business and customer needs. Recognizing that companies face an uphill battle to understand how AI can help them realize their organizational objectives, Globant has embraced a unique organizational structure called “Agile Pods.” Pods are multidimensional teams, composed of members from Globant’s various Studios, that serve as customer-facing service delivery teams and help ensure that its solutions are built and implemented with a customer-first mindset.

Rethinking change control in software engineering

Programmers who make mistakes with their conditional feature flags can accidentally expose a change in production when it is supposed to stay dark, and they might not be able to roll it back -- not easily, at least. The key to using feature flags is to place them where they make sense and to diligently make smart decisions about the risk they create. A key issue in change control in software engineering is figuring out whom change control affects and how it affects them. If nearly everyone is affected by a change -- a likely scenario for teams contributing to a single mobile app deployment -- there tends to be heavy regression testing, triage meetings, go/no-go meetings and documentation. This bureaucratic process often adds cost and delays, and it can be difficult to see where exactly the process provides value. One way to cut away barrier-inducing change control processes is to isolate the impact of changes.
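The flag-placement risk described above often comes down to defaults: a flag lookup that fails open can expose a dark feature by accident. A minimal sketch (the `FLAGS` store and `is_enabled` helper are hypothetical, not any particular flag library's API) shows a fail-closed flag where rollback is a config flip rather than a redeploy:

```python
# Hypothetical in-process flag store; real systems usually back this with
# a config service so flags can be flipped without redeploying.
FLAGS = {
    "new_checkout_flow": False,  # stays dark in production until enabled
}

def is_enabled(flag_name, flags=FLAGS):
    # Fail closed: an unknown, missing, or mistyped flag name never
    # turns a feature on -- the common mistake is defaulting to True.
    return flags.get(flag_name, False)

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return f"new flow: {len(cart)} items"
    return f"legacy flow: {len(cart)} items"

print(checkout(["book", "pen"]))   # legacy path while the flag is dark
FLAGS["new_checkout_flow"] = True  # enable; rolling back is flipping it off
print(checkout(["book", "pen"]))   # new path, no redeploy needed
```

The fail-closed default is what makes the rollback story safe: if the flag is misconfigured or misspelled, the change stays dark instead of leaking to users.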

5 biggest mistakes developers can make in a job interview

Interviews can be nerve-racking, but developers must avoid letting that apprehension take over their thought processes, said Tomás Pueyo, vice president of growth at Course Hero. "The biggest mistake I see when interviewing tech candidates is jumping to solutions before understanding the problem," Pueyo said. "Candidates are eager to answer questions, so they believe the faster they come up with a solution, the cleverer they will sound. But this is not what our job is about." "In tech, we deal with massive amounts of data, solving problems that are frequently unclear. A key marker of wisdom is taking a step back, gathering all the available information, understanding it, and only then jumping to solutions," Pueyo added. While interviews do focus on questioning the interviewee, the candidate should also have their own questions prepared, Hill said. "As a hiring manager, I expect the candidate to come with their own questions. That's how I know that they're enthusiastic about the company, and that they're eager to learn and improve," Hill noted.

Quote for the day:

"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold