
How-To Tutorials

4 ways to prepare for negotiating your first offer as a developer

Guest Contributor
14 Jun 2019
7 min read
The future job outlook for developers is promising, with 24% projected growth between 2016 and 2026. Developers are increasingly in demand and their salaries worldwide are rising, with a 2018 US median of $50 per hour. If you're a recent grad looking for work, these four tips can help you prepare to negotiate your first offer.

Understand your skill value

What developer skills do you possess that are relevant to the post you're applying for? Your skills are a mix of your developer training (technical skills) and your people skills (soft skills). Base your negotiation on the measurable professional value you can provide to your organization. For instance, if you have some experience in sales, prepare to demonstrate how this can add value to the business by better understanding your stakeholders' needs, and perhaps by promoting what your department can deliver in non-technical language. If you've previously attended contract negotiation classes, you may offer additional value to the developer team at your new company. Understand what skills, qualities, and attitudes your employer values and position yourself accordingly.

Research and prepare

It's important to thoroughly research what you can expect from your industry, company, and role. The knowledge that comes with extensive research allows you to be prepared when exploring your options and when deciding whether to accept your first offer.

Industry standards

What's the median salary for entry-level developers in your area? What kind of perks can you expect? Familiarize yourself with the industry standards before you walk into your interview. Some great places to begin your research include:
- payscale.com
- indeed.com
- glassdoor.com
- salary.com

Company culture

Another aspect you should consider is the company culture. To determine if a company's culture is the right fit for you, consider:
- What working environments do I work best in?
- Do the company's values align with my own?
- How are the interpersonal interactions between staff and management?
- What does my career trajectory look like, and can I achieve it with the company?

Perks and benefits

Research the perks that developers in your area of expertise and with your level of skill can expect. For example:
- Do developers get ongoing training?
- Are other employers offering equity in the company? This is highly relevant to tech startups.
- How many paid sick days and vacation days are you eligible for?
- Is there usually a probationary period for entry-level developers in your area? If so, you should be aware of the length of probation and perhaps negotiate the criteria against which your performance will be judged.
- What kind of medical and dental insurance can you expect?
- Do you get better perks and pay if you're a member of a professional association?
- Can you work from home? Are a phone and computer included?

Set your expectations right

With your well-researched knowledge of industry standards, set your salary expectations ahead of the salary negotiation. Prior research helps you have a number in mind, and a relevant contract negotiation course can help you use that number as your negotiation anchor. Your negotiation anchor acts as your reference point whenever you have salary discussions. If the offer is too low, you can decide to walk away and keep looking for better opportunities. If the offer is close to your anchor, use your negotiation training to improve your rate.
If the company makes an offer that's much higher than your salary expectations, pause to understand the reason. Might you have undervalued yourself? Is the company expecting longer hours or more deliverables than its competitors? Does the company offer far fewer benefits, so that everything is built into the higher rate?

While some recruiters may ask about your salary expectations during the initial screening process, some may hold off salary discussions until you've met face to face. Additionally, some recruiters will ask directly what you expect salary-wise, or they might ask you to respond to a salary range offer. Whichever method your recruiter or interviewer chooses, it's best either not to give a number or to wait until the interview. Alternatively, if you prefer to share a number, ensure you don't allow your current or previous position to limit your salary aspirations. Do not share your current or previous salary unless doing so bolsters your aspirations. When you have a set salary expectation, it shows that:
- You know your worth.
- You are clear about the value you bring.
- You are confident.
- You are aware of the company's pay scales for developers at entry level, mid-level, and high proficiency level.

Know your strengths to match the required job role

Expert negotiation classes equip developers to understand the threats and opportunities their prospective employers may be facing. You can leverage that information to work out the best value exchange, resulting in a win-win contract between you and your employer. Try to find out:
- Why does the company need a new developer? If the underlying need isn't shared with you early on in the process, ask.
- What is the company's budget? While you may not be able to get this number out of your boss, it's worth asking. You will be surprised how often a company will divulge their ceiling.
- What are your prospective boss's strategic goals for the year? Bosses love a team who do their best to make their boss look good.
- What are the developer pain points faced by the company? Virtually every role is filled to address a problem or pain point.
- What are the competitors' goals for the next year? Knowing this places you head and shoulders ahead of other developers who don't think expansively enough.
- How much time and how many resources has the prospective employer invested in recruiting and retaining a developer of your skill level? If they have invested a lot, their motivation to close the hire will likely be high, so they should be more flexible in negotiating your remuneration package.

It may be more difficult to get a high salary from a cash-strapped company. However, there's still room for negotiation if your developer skill set is going to increase the company's revenue or significantly reduce expenditure or risk. For instance, if you develop software that automates customer acquisition and increases marketing return on investment (ROI), you can justify asking for a higher salary or other benefits.

Final thoughts

Most developers find salary negotiations uncomfortable and awkward, especially after getting a job offer following a protracted search in a competitive field. As a developer, don't let fear and uncertainty deter you from negotiating for the salary you deserve. You're more likely to achieve a sizable increase in salary and benefits at your first negotiation than at subsequent reviews once in the role. Research industry standards to set justifiable expectations.
Know your worth and strategically use leverage to create a win-win relationship between you and your prospective employer.

Author Bio: James Tighe is a long-time content creator and editor. Through his writings, James brings the best and most important lessons from negotiation classes in NYC to a business audience. He also enjoys the opportunity to work with skilled negotiators, integrating best practices into his own life.

- Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey
- Does it make sense to talk about DevOps engineers or DevOps tools?
- Polyglot programming allows developers to choose the right language to solve tough engineering problems

Austrian Supreme Court rejects Facebook’s bid to stop a GDPR-violation lawsuit against it by privacy activist, Max Schrems

Bhagyashree R
13 Jun 2019
5 min read
On Tuesday, the Austrian Supreme Court dismissed Facebook's appeal to block a lawsuit against it for not conforming to Europe's General Data Protection Regulation (GDPR). The decision will also have an effect on other EU member states that give "special status to industry sectors."

https://twitter.com/maxschrems/status/1138703007594496000?s=19

The lawsuit was filed by Austrian lawyer and data privacy activist Max Schrems. In the lawsuit, he accuses Facebook of using illegal privacy policies, as it forces users to give their consent for processing their data in return for using the service. GDPR does not allow forced consent as a valid legal basis for processing user data.

Schrems said in a statement, "Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the 'agree' button - that's not a free choice; it more reminds of a North Korean election process. Many users do not know yet that this annoying way of pushing people to consent is actually forbidden under GDPR in most cases."

Facebook had been trying to block this lawsuit by questioning whether GDPR-based cases fall under the jurisdiction of courts at all. According to Facebook's appeal, these lawsuits should be handled by data protection authorities, in this case the Irish Data Protection Commissioner (DPC). Dismissing Facebook's argument, this landmark decision says that any complaint made under Article 79 of GDPR can be reviewed both by judges and by data protection authorities.

The verdict comes as a relief for Schrems, who had to wait almost five years to even get this lawsuit to trial because of Facebook's continuous attempts to block it. "I am very pleased that we were able to clarify this fundamental issue. We are hoping for a speedy procedure now that the case has been pending for a good 5 years," Schrems said in a press release. He further added, "If we win even part of the case, Facebook would have to adapt its business model considerably. We are very confident that we will succeed on the substance too now. Of course, they wanted to prevent such a case by all means and blocked it for five years."

Previously, the Vienna Regional Court had ruled in Facebook's favor, declaring that it did not have jurisdiction and that Facebook could only be sued in Ireland, where its European headquarters are. Schrems believes that this verdict was given because there is "a tendency that civil judges are not keen to have (complex) GDPR cases on their table." Now, both the Appellate Court and the Austrian Supreme Court have agreed that everyone can file a lawsuit for GDPR violations. Schrems' original idea was to bring a "class action" style suit against Facebook by allowing any Facebook user to join the case. The court did not allow that, and Schrems was limited to bringing only a model case.

This is Schrems' second victory this year in his fight against Facebook. Last month, the Irish Supreme Court dismissed Facebook's attempt to stop the referral of a privacy case regarding the transfer of EU citizens' data to the United States. The hearing of that case is now scheduled to take place at the European Court of Justice (ECJ) in July.

Schrems' eight-year-long battle against Facebook

Schrems' fight against Facebook started well before most of us realized the severity of tech companies harvesting our personal data. Back in 2011, Schrems' professor at Santa Clara University invited Facebook's privacy lawyer Ed Palmieri to speak to his class.
Schrems was surprised by the lawyer's lack of awareness of data protection laws in Europe. He then decided to write his thesis about Facebook's misunderstanding of EU privacy laws. As part of the research, he requested his personal data from Facebook and found it held his entire user history. He went on to make 22 complaints to the Irish Data Protection Commission, in which he accused Facebook of breaking European data protection laws. His efforts finally showed results when, in 2015, the European Court of Justice struck down the EU-US Safe Harbor Principles.

As part of his fight for global privacy rights, Schrems also co-founded the European non-profit noyb (None of Your Business), which aims to "make privacy real". The organization works to make privacy enforcement more effective, holds companies that fail to follow Europe's privacy laws accountable, and runs media initiatives to support GDPR.

It looks like things haven't been going well for Facebook. Along with losing these cases in the EU, a revelation yesterday by the WSJ surfaced several emails that indicate Mark Zuckerberg's knowledge of potentially problematic privacy practices at the company.

You can read the entire press release on noyb's official website.

- Facebook releases Pythia, a deep learning framework for vision and language multimodal research
- Zuckerberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed?
- US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny

Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Vincy Davis
13 Jun 2019
9 min read
Java 11 includes a lot of improvements and changes in the GC (garbage collection) domain. Z Garbage Collector (ZGC) is a scalable, low-latency GC, written completely from scratch. It can work with heaps ranging from relatively small to multi-terabyte in size. As a concurrent garbage collector, ZGC promises not to add more than 10 milliseconds of latency to your application, even for large heap sizes. It is also easy to tune. It was released with Java 11 as an experimental GC. Work on this GC is in progress in OpenJDK, and more changes can be expected over time.

This article is an excerpt taken from the book Java 11 and 12 - New Features, written by Mala Gupta. In this book, you will learn about the latest developments in Java, from variable type inference and simplified multithreading through to performance improvements, and much more. In this article, you will understand the need for ZGC, its features, how it works, the ZGC heap, ZGC phases, and colored pointers.

Need for Z Garbage Collector

One of the features that drove the rise of Java in the early days was its automatic memory management with its GCs, which freed developers from manual memory management and reduced memory leaks. However, with unpredictable timings and durations, garbage collection can (at times) do more harm to an application than good. Increased latency directly affects the throughput and performance of an application. With ever-decreasing hardware costs and programs engineered to use large amounts of memory, applications are demanding lower latency and higher throughput from garbage collectors. ZGC promises a latency of no more than 10 milliseconds, which doesn't increase with heap size or live-set size. This is because its stop-the-world pauses are limited to root scanning.

Features of Z Garbage Collector

ZGC brings in a lot of features, which have been instrumental in its proposal, design, and implementation. One of the most outstanding features of ZGC is that it is a concurrent GC. Other features include:
- It can mark memory and copy and relocate it, all concurrently. It also has a concurrent reference processor.
- As opposed to the store barriers used by other HotSpot GCs, ZGC uses load barriers. The load barriers are used to keep track of heap usage.
- One of the intriguing features of ZGC is the use of load barriers with colored pointers. This is what enables ZGC to perform operations concurrently while Java threads are running, such as object relocation or relocation set selection.
- ZGC is more flexible in configuring its size and scheme. Compared to G1, ZGC has better ways to deal with very large object allocations.
- ZGC is a single-generation GC. It also supports partial compaction.
- ZGC is highly performant when it comes to reclaiming memory and reallocating it.
- ZGC is NUMA-aware, which essentially means that it has a NUMA-aware memory allocator.

Getting started with Z Garbage Collector

Working with ZGC involves multiple steps: you need to get a JDK that includes it (ZGC is specific to Linux/x64), build it, and start using it. The following commands can be used to download the sources and build them on your system:

$ hg clone http://hg.openjdk.java.net/jdk/jdk
$ cd jdk
$ sh configure --with-jvm-features=zgc
$ make images

After the preceding commands finish, the JDK root directory can be found in the following location:

./build/linux-x86_64-normal-server-release/images/jdk

Java tools such as java, javac, and others can be found in the /bin subdirectory of the preceding path (their usual location).
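If the build completed successfully, a quick sanity check (a minimal sketch, assuming the default Linux/x64 release build described above) is to run the freshly built java binary straight from the build output directory:

# Run the newly built JDK's java launcher and print its version
$ ./build/linux-x86_64-normal-server-release/images/jdk/bin/java -version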
Let's create a basic HelloZGC class, as follows:

class HelloZGC {
    public static void main(String[] args) {
        System.out.println("Say hello to new low pause GC - ZGC!");
    }
}

The following command can be used to enable ZGC and use it:

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC HelloZGC

Since ZGC is an experimental GC, you need to unlock it using the runtime option -XX:+UnlockExperimentalVMOptions. For enabling basic GC logging, you can add the -Xlog:gc option. Detailed logging is helpful while fine-tuning an application; you can enable it by using the -Xlog:gc* option as follows:

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xlog:gc* HelloZGC

The previous command outputs all the logs to the console, which can make it difficult to search for specific content. You can direct the logs to a file instead, as follows:

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xlog:gc*:mylog.log HelloZGC

Z Garbage Collector heap

ZGC divides memory into regions, also called ZPages. ZPages can be dynamically created and destroyed. Unlike G1 regions, they can also be dynamically sized, in multiples of 2 MB. Here are the size groups of heap regions:
- Small (2 MB)
- Medium (32 MB)
- Large (N * 2 MB)

The ZGC heap can have multiple occurrences of these heap regions, and the medium and large regions are allocated contiguously.

Unlike other GCs, the physical heap regions of ZGC can map into a bigger heap address space (which can include virtual memory). This can be crucial in combating memory fragmentation issues. Imagine that you want to allocate a really big object in memory but can't, due to the unavailability of contiguous space. This often leads to multiple GC cycles to free up enough contiguous space. If none is available, even after (multiple) GC cycles, the JVM shuts down with OutOfMemoryError. This particular use case is not an issue with ZGC: since the physical memory maps to a bigger address space, locating a bigger contiguous space is feasible.

Z Garbage Collector phases

A GC cycle of ZGC includes multiple phases:
- Pause Mark Start
- Pause Mark End
- Pause Relocate Start

In the first phase, Pause Mark Start, ZGC marks objects that are pointed to by roots. This includes walking through the live set of objects, and then finding and marking them. This is by far one of the most heavy-duty workloads in the ZGC GC cycle.

Once this completes, the next phase is Pause Mark End, which is used for synchronization and starts with a short pause of 1 ms. In this second phase, ZGC starts with reference processing and moves on to weak-root cleaning. It also includes the relocation set selection: ZGC marks the regions it wants to compact.

The next phase, Pause Relocate Start, triggers the actual region compaction. It begins with root scanning pointing into the relocation set, followed by the concurrent relocation of objects in the relocation set.

The first phase, Pause Mark Start, also includes remapping the live data. Since marking and remapping of live data is the most heavy-duty GC operation, it isn't executed as a separate phase. Remapping starts after Pause Relocate Start but overlaps with the Pause Mark Start phase of the next GC cycle.

Colored pointers

Colored pointers are one of the core concepts of ZGC. They enable ZGC to find, mark, locate, and remap objects. Colored pointers are not supported on 32-bit platforms.
Implementing colored pointers requires virtual address masking, which can be accomplished in hardware, in the operating system, or in software. The 64-bit object reference is divided as follows:
- 18 bits: unused
- 1 bit: Finalizable
- 1 bit: Remapped
- 1 bit: Marked1
- 1 bit: Marked0
- 42 bits: object address

The first 18 bits are reserved for future use. The 42 address bits can address up to 4 TB of address space. That leaves the remaining, intriguing, 4 bits. The Marked1 and Marked0 bits are used to mark objects for garbage collection. By setting the Remapped bit, an object can be marked as not pointing into the relocation set. The Finalizable bit relates to concurrent reference processing; it marks an object as reachable only through a finalizer.

When you run ZGC on a system, you will notice that it uses a lot of virtual memory space, which is not the same as the physical memory space. This is due to heap multi-mapping, which specifies how objects with colored pointers are stored in virtual memory. As an example, for a colorless pointer, say 0x0000000011111111, its colored pointers would be 0x0000100011111111 (Remapped bit set), 0x0000080011111111 (Marked1 bit set), and 0x0000040011111111 (Marked0 bit set). The same physical heap memory maps to three different locations in the address space, each corresponding to one colored pointer. This would be implemented differently if the mapping were handled differently (in hardware or the operating system rather than in software).

Tuning Z Garbage Collector

To get optimal performance, set a heap size that can not only hold the live set of your application but also leaves enough headroom to service allocations.

ZGC is a concurrent garbage collector. By setting the amount of CPU time that should be assigned to ZGC threads, you can control how often the GC kicks in:

-XX:ConcGCThreads=<number>

A higher value for the ConcGCThreads option leaves less CPU time for your application. On the other hand, a lower value may result in your application struggling for memory; your application might generate more garbage than ZGC can collect. ZGC can also use default values for ConcGCThreads. To fine-tune this parameter, you might prefer to experiment with test values.

For advanced ZGC tuning, you can also enable large pages for enhanced application performance:

-XX:+UseLargePages

Instead of enabling large pages, you can enable transparent huge pages by using the following option:

-XX:+UseTransparentHugePages

The preceding option involves additional settings and configuration, which are described on ZGC's official wiki page.

ZGC is a NUMA-aware GC. Applications executing on NUMA machines can see a noticeable performance gain. By default, NUMA support is enabled for ZGC. However, if the JVM detects that it is bound to only a subset of the CPUs on the system, this feature may be disabled. To override the JVM's decision, the following option can be used:

-XX:+UseNUMA

Summary

We have briefly discussed the scalable, low-latency GC for OpenJDK: ZGC. It is an experimental GC, which has been written from scratch. As a concurrent GC, it promises a max latency of less than 10 milliseconds, which doesn't increase with heap size or live data. At present, it only works on Linux/x64.
More platforms can be supported in the future if there is considerable demand. To know more about the applicability of Java's new features, head over to the book, Java 11 and 12 – New Features.

- Using lambda expressions in Java 11 [Tutorial]
- Creating a simple modular application in Java 11 [Tutorial]
- Java 11 is here with TLS 1.3, Unicode 11, and more updates

Highlights from Mary Meeker’s 2019 Internet trends report

Sugandha Lahoti
12 Jun 2019
8 min read
At Recode by Vox's 2019 Code Conference on Tuesday, Bond partner Mary Meeker presented her report onstage, covering the internet's latest trends. Meeker first started presenting these reports in 1995, underlining the most important statistics and technology trends on the internet. Last September, Meeker quit Kleiner Perkins to start her own firm, Bond; she is popularly known as the Queen of the Internet.

Mary Meeker's 2019 Internet trends report highlighted that the internet is continuing to grow, slowly, as more users come online, especially on mobile devices. She also talked about increased internet ad spending, data growth, the rise of freemium subscription business models, interactive gaming, the on-demand economy, and more.

https://youtu.be/G_dwZB5h56E

The internet trends highlighted by Meeker include:
- Internet Users
- E-commerce and advertising
- Internet Usage
- Freemium business models
- Data growth
- Jobs and Work
- Online Education
- Immigration and Healthcare

Internet Users

More than 50% of the world's population now has access to the internet. There are 3.8 billion internet users in the world, with Asia-Pacific leading in both users and potential. China is the largest market with 21% of total internet users, and India is at 12%. However, growth is slowing: 6% in 2018 versus 7% in 2017, because so many people have come online that new users are harder to come by. New smartphone unit shipments actually declined in 2018.

Among the global internet market cap leaders, the U.S. is stable at 18 of the top 30 and China is stable at 7 of the top 30. These are the two leading countries where internet innovation is at an especially high level. Revenue growth for the internet market cap leaders continues to slow: 11 percent year-on-year in Q1 versus 13 percent in Q4.

Internet usage

Internet usage showed solid growth, driven by investment in innovation. Digital media usage in the U.S. is accelerating, up 7% versus 5% growth in 2017. The average US adult spends 6.3 hours each day with digital media, over half of which is spent on mobile. Wearables had 52 million users, a figure that doubled in four years. Roughly 70 million people in the US listen to podcasts, a figure that's doubled in about four years.

Outside the US, there is especially high innovation in data-driven and direct fulfillment, which is growing very rapidly in China. Innovation outside the US is also especially strong in financial services. Images are becoming an increasingly relevant way to communicate: more than 50% of tweet impressions today involve images, video, or other forms of media. Interactive gaming innovation is rising across platforms as interactive games like Fortnite become the new social media for certain people; gaming is accelerating, with 2.4 billion users, up 6 percent year-on-year in 2018.

On the flip side

Almost 26% of adults are constantly online, versus 21% three years ago. That number jumps to 39% for 18 to 29 year-olds surveyed. However, digital media users are taking action to reduce their usage, and businesses are also taking action to help users monitor their usage. Social media usage has decelerated, up 1% in 2018 versus 6% in 2017. Privacy concerns are high but moderating; regulators and businesses are improving consumer privacy controls. In digital media, encrypted messaging and traffic are rising rapidly: in Q1, 87 percent of global web traffic was encrypted, up from 53 percent three years ago. Another usage concern is problematic content.
Problematic content on the internet can be less filtered and more amplified. Images and streaming can be more powerful than text. Algorithms can amplify content based on usage patterns, and social media can amplify trending topics. Bad actors can amplify ideologies, unintended bad actors can amplify misinformation, and extreme views can amplify polarization. However, internet platforms are driving efforts to reduce problematic content, as are consumers and businesses. 88% of people in the U.S. believe the internet has been mostly good for them, and 70% believe it has been mostly good for society. Cyber attacks have continued to rise; these include state-sponsored attacks, large-scale data provider attacks, and monetary extortion attacks.

E-commerce and online advertising

E-commerce is now 15 percent of retail sales. Its growth has slowed, up 12.4 percent in Q1 compared with a year earlier, but it still towers over growth in regular retail, which was just 2 percent in Q1. In online advertising, comparing the amount of media time spent against the amount of advertising dollars spent, mobile hit equilibrium in 2018, while desktop hit that equilibrium point in 2015. Internet ad spending on an annual basis accelerated a little in 2018, up 22 percent. Most of the spending still goes to Google and Facebook, but companies like Amazon and Twitter are getting a growing share. Some 62 percent of all digital display ad buying is for programmatic ads, which will continue to grow. For the leading tech companies, internet ad revenue growth has been decelerating, at around 20 percent in Q1.

Google and Facebook still account for the majority of online ad revenue, but the growth of US advertising platforms like Amazon, Twitter, Snapchat, and Pinterest is outstripping the big players: Google's ad revenue grew 1.4 times over the past nine quarters and Facebook's grew 1.9 times, while the combined group of new players grew 2.6 times. Customer acquisition costs (the marketing spending necessary to attract each new customer) are going up. That's unsustainable because, in some cases, it surpasses the long-term revenue those customers will bring. Meeker suggests cheaper ways to acquire customers, like free trials and unpaid tiers.

Freemium business models

Freemium business models are growing and scaling. A freemium business offers a free user experience, which enables more usage, engagement, social sharing, and network effects, alongside a premium user experience, which drives monetization and product innovation. Freemium business evolution started in gaming and is now evolving and emerging in consumer and enterprise markets. One of the important factors in this growth is cloud deployment revenue, which grew about 58% year-over-year. Another enabler of freemium subscription business models is efficient digital payments, which account for more than 50% of day-to-day transactions around the world.

Data growth

Internet trends indicate that a number of data plumbers are helping companies collect data, manage connections, and optimize data. In a survey of retail customers, 91% preferred brands that provided personalized offers and recommendations, 83% were willing to passively share data in exchange for personalized services, and 74% were willing to actively share data in exchange for personalized experiences. Data volume and utilization are also evolving rapidly: enterprise data surpassed consumer data in 2018, and cloud is overtaking both.
More data is now stored in the cloud than on private enterprise servers or consumer devices.

Jobs and Work

Strong economic indicators and internet-enabled services are supporting jobs and work. Looking at global GDP, China, the US, and India are rising, but Europe is falling. Cross-border trade is at 29% of global GDP and has been growing for many years. Concern about unemployment is very high globally but low in the US. The consumer confidence index is high and rising. Unemployment is at a 19-year low, job openings are at an all-time high, and wages are rising. On-demand work is creating internet-enabled opportunities and efficiencies: there are 7 million on-demand workers, up 22 percent year-on-year. Remote work is also creating internet-enabled work opportunities and efficiency: the share of Americans working remotely has risen to 5 percent, from 3 percent in 2000.

Online education

Education costs and student debt are rising in the US, whereas post-secondary education enrollment is slowing. Online education enrollment is high across a diverse base of universities: public, private for-profit, and private not-for-profit. Top offline institutions are ramping up their online offerings at a very rapid rate, most recently the University of Pennsylvania, the University of London, the University of Michigan, and the University of Colorado Boulder. Google's creation of certificates for in-demand jobs, in collaboration with Coursera, is growing rapidly.

Immigration and Healthcare

In the U.S., 60% of the most highly valued tech companies were founded by first- or second-generation Americans; they employed 1.9 million people last year. US entitlements account for 61% of government spending, versus 42% thirty years ago, and show no signs of slowing. Healthcare is steadily digitizing, driven by consumers, and the trends are very powerful: expect more telemedicine and on-demand consultations.

For details and infographics, we recommend you go through the slide deck of the Internet trends report.

- What Elon Musk and South African conservation can teach us about technology forecasting
- Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them
- Experts present the most pressing issues facing global lawmakers on citizens' privacy, democracy and the rights to freedom of speech

How to push Docker images to AWS' Elastic Container Registry(ECR) [Tutorial]

Savia Lobo
12 Jun 2019
12 min read
Currently, the most commonly adopted way to store and deliver Docker images is through Docker Registry, an open source application by Docker that hosts Docker repositories. This application can be deployed on-premises, as well as used as a service from multiple providers, such as Docker Hub, Quay.io, and AWS ECR.

This article is an excerpt taken from the book Kubernetes on AWS, written by Ed Robinson. In this book, you will discover how to utilize the power of Kubernetes to manage and update your applications. In this article, you will learn how to use Docker to push images to ECR.

The registry application itself is a simple, stateless service, where most of the maintenance work involves making sure that storage is available, safe, and secure. As any seasoned system administrator knows, that is far from an easy ordeal, especially if there is a large data store. For that reason, and especially if you're just starting out, it is highly recommended to use a hosted solution and let someone else deal with keeping your images safe and readily available.

ECR is AWS's approach to a hosted Docker registry, with one registry per account. It uses AWS IAM to authenticate and authorize users to push and pull images. By default, the limits for both repositories and images are set to 1,000.

Creating a repository

To create a repository, it's as simple as executing the following aws ecr command:

$ aws ecr create-repository --repository-name randserver

This will create a repository for storing our randserver application. Its output should look like this:

{
    "repository": {
        "repositoryArn": "arn:aws:ecr:eu-central-1:123456789012:repository/randserver",
        "registryId": "123456789012",
        "repositoryName": "randserver",
        "repositoryUri": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver",
        "createdAt": 1543162198.0
    }
}

A nice addition to your repositories is a life cycle policy that cleans up older versions of your images, so that you don't eventually get blocked from pushing a newer version. This can be achieved as follows, using the same aws ecr command:

$ aws ecr put-lifecycle-policy --registry-id 123456789012 --repository-name randserver --lifecycle-policy-text '{"rules":[{"rulePriority":10,"description":"Expire old images","selection":{"tagStatus":"any","countType":"imageCountMoreThan","countNumber":800},"action":{"type":"expire"}}]}'

This particular policy will start cleaning up once you have more than 800 images in the repository. You could also clean up based on image age, count, or both, as well as consider only some tags in your cleanup.

Pushing and pulling images from your workstation

In order to use your newly-created ECR repository, first we're going to need to authenticate your local Docker daemon against the ECR registry. Once again, aws ecr will help you achieve just that:

aws ecr get-login --registry-ids 123456789012 --no-include-email

This will output a docker login command that will add a new user-password pair to your Docker configuration.
You can copy-paste that command, or you can just run it as follows; the results will be the same:

$(aws ecr get-login --registry-ids 123456789012 --no-include-email)

Now, pushing and pulling images is just like using any other Docker registry, using the repository URI that we got when creating the repository:

$ docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:0.0.1
$ docker pull 123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:0.0.1

Setting up privileges for pushing images

IAM users' permissions should allow your users to perform strictly only the operations they actually need to, in order to avoid any possible mistakes that might have a larger area of impact. This is also true for ECR management, and to that effect, there are three AWS IAM managed policies that greatly simplify achieving it:

- AmazonEC2ContainerRegistryFullAccess: This allows a user to perform any operation on your ECR repositories, including deleting them, and should therefore be left for system administrators and owners.
- AmazonEC2ContainerRegistryPowerUser: This allows a user to push and pull images on any repository, which is very handy for developers that are actively building and deploying your software.
- AmazonEC2ContainerRegistryReadOnly: This allows a user to pull images from any repository, which is useful for scenarios where developers are not pushing their software from their workstation, and are instead just pulling internal dependencies to work on their projects.

Any of these policies can be attached to an IAM user as follows, by replacing the policy name at the end of the ARN with a suitable policy and pointing --user-name to the user you are managing:

$ aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly --user-name johndoe

All of these AWS managed policies share an important characteristic: they add permissions for all repositories on your registry. You'll probably find several use cases where this is far from ideal; maybe your organization has several teams that do not need access to each other's repositories, maybe you would like a user with the power to delete some repositories but not all, or maybe you just need access to a single repository for a Continuous Integration (CI) setup. If your needs match any of these situations, you should create your own policies with permissions as granular as required.

First, we will create an IAM group for the developers of our randserver application:

$ aws iam create-group --group-name randserver-developers
{
    "Group": {
        "Path": "/",
        "GroupName": "randserver-developers",
        "GroupId": "AGPAJRDMVLGOJF3ARET5K",
        "Arn": "arn:aws:iam::123456789012:group/randserver-developers",
        "CreateDate": "2018-10-25T11:45:42Z"
    }
}

Then we'll add the johndoe user to the group:

$ aws iam add-user-to-group --group-name randserver-developers --user-name johndoe

Now we'll need to create our policy so that we can attach it to the group.
Copy this JSON document to a file:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:GetRepositoryPolicy",
            "ecr:DescribeRepositories",
            "ecr:ListImages",
            "ecr:DescribeImages",
            "ecr:BatchGetImage",
            "ecr:InitiateLayerUpload",
            "ecr:UploadLayerPart",
            "ecr:CompleteLayerUpload",
            "ecr:PutImage"
        ],
        "Resource": "arn:aws:ecr:eu-central-1:123456789012:repository/randserver"
    }]
}

To create the policy, execute the following, passing the appropriate path for the JSON document file:

$ aws iam create-policy --policy-name EcrPushPullRandserverDevelopers --policy-document file://./policy.json
{
    "Policy": {
        "PolicyName": "EcrPushPullRandserverDevelopers",
        "PolicyId": "ANPAITNBFTFWZMI4WFOY6",
        "Arn": "arn:aws:iam::123456789012:policy/EcrPushPullRandserverDevelopers",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2018-10-25T12:00:15Z",
        "UpdateDate": "2018-10-25T12:00:15Z"
    }
}

The final step is to attach the policy to the group, so that johndoe and all future developers of this application can use the repository from their workstations:

$ aws iam attach-group-policy --group-name randserver-developers --policy-arn arn:aws:iam::123456789012:policy/EcrPushPullRandserverDevelopers

Use images stored on ECR in Kubernetes

Attaching the IAM policy AmazonEC2ContainerRegistryReadOnly to the instance profile used by our cluster nodes allows the nodes to fetch any image in any repository in the AWS account where the cluster resides. To use an ECR repository in this manner, you should set the image field of the pod template in your manifest to point to it, as in the following example:

image: 123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:0.0.1

Tagging images

Whenever a Docker image is pushed to a registry, we need to identify the image with a tag. A tag can be any alphanumeric string: latest, stable, v1.7.3, and even c31b1656da70a0b0b683b060187b889c4fd1d958 are all perfectly valid examples of tags that you might use to identify an image that you push to ECR. Depending on how your software is developed and versioned, what you put in this tag might differ. There are three main strategies that might be adopted, depending on the types of applications and development processes we need to generate images for.

Version Control System (VCS) references

When you build images from software whose source is managed in a version control system, such as Git, the simplest way of tagging your images is to utilize the commit ID (often referred to as a SHA when using Git) from your VCS. This gives you a very simple way to check exactly which version of your code is running at any one time.

This first strategy is often adopted for applications where small changes are delivered in an incremental fashion. New versions of your images might be pushed multiple times a day and automatically deployed to testing and production-like environments. Good examples of these kinds of applications are web applications and other software delivered as a service. By pushing a commit ID through an automated testing and release pipeline, you can easily generate deployment manifests for an exact revision of your software.
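As a minimal sketch of this strategy (assuming the randserver repository created earlier and standard Git and Docker command-line tools), you could tag an image with the current commit SHA and push it to ECR like this:

# Tag the image with the short Git commit SHA and push it to the ECR repository
$ SHA=$(git rev-parse --short HEAD)
$ docker build -t 123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:$SHA .
$ docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:$SHA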
Semantic versions

However, this strategy becomes more cumbersome and harder to deal with if you are building container images that are intended to be used by many users, whether that be multiple users within your organisation or third parties consuming images you publish publicly. With applications like these, it can be helpful to use a semantic version number that carries some meaning, helping those that depend on your image decide if it is safe to move to a newer version.

A common scheme for these sorts of images is called Semantic Versioning (SemVer). This is a version number made up of three individual numbers separated by dots, known as the MAJOR, MINOR, and PATCH versions. A semantic version number lays out these numbers in the form MAJOR.MINOR.PATCH. When a number is incremented, the less significant numbers to its right are reset to 0. These version numbers give downstream users useful information about how a new version might affect compatibility:
- The PATCH version is incremented whenever a bug or security fix is implemented that maintains backwards compatibility.
- The MINOR version is incremented whenever a new feature is added that maintains backwards compatibility.
- Any change that breaks backwards compatibility should increment the MAJOR version number.

This is useful because users of your images know that MINOR or PATCH level changes are unlikely to break anything, so only basic testing should be required when upgrading to a new version. But if upgrading to a new MAJOR version, they ought to check and test the impact of the changes, which might require changes to configuration or integration code.

Upstream version numbers

Often, when we build container images that repackage existing software, it is desirable to use the original version number of the packaged software itself. Sometimes, it can help to add a suffix to version the configuration that you're using to package that software with. In larger organizations, it can be common to package software tools with configuration files containing organisation-specific default settings. You might find it useful to version the configuration files as well as the software tool. If I were packaging the MySQL database for use in my organization, an image tag might look like 8.0.12-c15, where 8.0.12 refers to the upstream MySQL version and c15 is a version number I have created for the MySQL configuration files included in my container image.

Labelling images

If you have an even moderately complex workflow for developing and releasing your software, you might quickly find yourself wanting to add more semantic information about your images into the tag than just a simple version number. This can quickly become unwieldy, as you will need to modify your build and deployment tooling whenever you want to add some extra information. Thankfully, Docker images carry labels that can be used to store whatever metadata is relevant to your image.

Adding a label to your image is done at build time, using the LABEL instruction in your Dockerfile. The LABEL instruction accepts multiple key-value pairs in this format:

LABEL <key>=<value> <key>=<value> ...

Using this instruction, we can store any arbitrary metadata that we find useful on our images. And because the metadata is stored inside the image, unlike tags, it can't be changed. By using appropriate image labels, we can discover the exact revision from our VCS, even if an image has been given an opaque tag, such as latest or stable.
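As an illustrative sketch (the label key and image name here are made up for the example), a static label can be baked into an image at build time and read back later with docker inspect:

# Dockerfile with a hard-coded label
FROM scratch
LABEL git-commit=c31b1656

# Build the image, then read the label back from its metadata
$ docker build -t labelled-example .
$ docker inspect --format '{{ index .Config.Labels "git-commit" }}' labelled-example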
If you want to set these labels dynamically at build time, you can also make use of the ARG instruction in your Dockerfile. Let's look at an example of using build args to set labels. Here is an example Dockerfile:

FROM scratch
ARG SHA
ARG BEAR=Paddington
LABEL git-commit=$SHA \
      favorite-bear=$BEAR \
      marmalade="5 jars"

When we build the container, we can pass values for our labels using the --build-arg flag. This is useful when we want to pass dynamic values such as a Git commit reference:

docker build --build-arg SHA=`git rev-parse --short HEAD` -t bear .

As with the labels that Kubernetes allows you to attach to the objects in your cluster, you are free to label your images with whatever scheme you choose, and save whatever metadata makes sense for your organization.

The Open Container Initiative (OCI), an organization that promotes standards for container runtimes and their image formats, has proposed a standard set of labels that can be used to provide useful metadata that can then be used by other tools that understand them. If you decide to add labels to your container images, choosing to use part or all of this set of labels might be a good place to start. To know more about these labels, you can head over to our book.

Summary

In this article, we discovered how to push images from our own workstations, how to use IAM permissions to restrict access to our images, and how to allow Kubernetes to pull container images directly from ECR. To know more about how to deploy a production-ready Kubernetes cluster on the AWS platform, and more, head over to the book Kubernetes on AWS.

- All Docker versions are now vulnerable to a symlink race attack
- GAO recommends for a US version of the GDPR privacy laws
- Cloud pricing comparison: AWS vs Azure

Polyglot programming allows developers to choose the right language to solve tough engineering problems

Richard Gall
11 Jun 2019
9 min read
Programming languages can divide opinion. They are, for many engineers, a mark of identity. Yes, they say something about the kind of work you do, but they also say something about who you are and what you value. But this is changing, with polyglot programming becoming a powerful and important trend. We're moving towards a world in which developers are no longer as loyal to their chosen programming languages as they once were. Instead, they are more flexible and open-minded about the languages they use.

This year's Skill Up report highlights that there are a number of different drivers behind the programming languages developers use, which, in turn, implies a level of contextual decision making. Put simply, developers today are less likely to stick with a specific programming language, and instead move between them depending on the problems they are trying to solve and the tasks they need to accomplish. Download this year's Skill Up report here.

[Chart: Skill Up 2019 data]

As the data above shows, languages aren't often determined by organizational requirements. They are more likely to be if you're primarily using Java or C#, but that makes sense, as these are languages that have long been associated with proprietary software organizations (Oracle and Microsoft respectively); in fact, programming languages are more often chosen due to projects and use cases.

The return to programming language standardization

This is something backed up by the most recent ThoughtWorks Radar, published in April. Polyglot programming finally moved its way into the Adopt 'quadrant', after 9 years of living in the Trial quadrant. Part of the reason for this, ThoughtWorks explains, is that the organization is seeing a reaction against this flexibility, writing that "we're seeing a new push to standardize language stacks by both developers and enterprises." The organization argues, quite rightly, that "promoting a few languages that support different ecosystems or language features is important for both enterprises to accelerate processes and go live more quickly and developers to have the right tools to solve the problem at hand."

Arguably, we're in the midst of a conflict within software engineering. The drive to standardize tooling in the face of increasingly complex distributed systems makes sense, but it's one that we should resist, because this level of standardization will ultimately remove decision-making power from engineers.

What's driving polyglot programming?

It's worth digging a little deeper into why developers are becoming more flexible about the languages they use. One of the most important drivers of this change is the dominance of Agile as a software engineering methodology. As Agile has become embedded in the software industry, software engineers have found themselves working across the stack rather than specializing in a specific part of it.

Full-stack development and polyglot programming

This is something suggested by Stack Overflow survey data. This year, 51.9% of developers described themselves as full-stack developers compared to 50.0% describing themselves as backend developers. This is a big change from 2018, when 57.9% described themselves as backend developers and 48.2% of respondents called themselves full-stack developers.
Given that earlier Stack Overflow data from 2016 indicates that full-stack developers are comfortable using more languages and frameworks than other roles, it's understandable that today we're seeing developers take more ownership and control over the languages (and, indeed, other tools) they use. With developers sitting in small Agile teams, working more closely to problem domains than they may have been a decade ago, the power is now much more in their hands to select and use the programming languages and tools that are most appropriate.

If infrastructure is code, more people are writing code... which means more people are using programming languages

But it's not just about full-stack development. With infrastructure today being treated as code, it makes sense that those responsible for managing and configuring it (sysadmins, SREs, systems engineers) need to use programming languages. This is a dramatic shift in how we think about system administration and infrastructure management; programming languages are now important to a whole new group of people.

Python and polyglot programming

The popularity of Python is symptomatic of this industry-wide change. Not only is it a language primarily selected due to use case (as the data above shows), it's also a language that's popular across the industry. When we asked our survey respondents what language they want to learn next, Python came out on top regardless of their primary programming language.

[Chart: Skill Up 2019 data]

This highlights that Python has appeal across the industry. It doesn't fit neatly into a specific job role, and it isn't designed for a specific task. It's flexible, as developers today need to be. Although it's true that Python's popularity is being driven by machine learning, it would be wrong to see this as the sole driver. It is, in fact, its wide range of use cases, ranging from scripting to building web services and APIs, that is making Python so popular. Indeed, it's worth noting that Python is viewed as a tool as much as it is a programming language: when we specifically asked survey respondents what tools they wanted to learn, Python came up again, suggesting it occupies a category unlike every other programming language.

[Chart: Skill Up 2019 data]

What about other programming languages?

The popularity of Python is a perfect starting point for today's polyglot programmer. It's relatively easy to learn, and it can be used for a range of different tasks. But if we're to convincingly talk about a new age of programming, where developers are comfortable using multiple programming languages, we have to look beyond the popularity of Python at other programming languages.

Perhaps a good way to do this is to look at the languages developers primarily using Python want to learn next. If you look at the graphic above, there's no clear winner for Python developers: while every other language shows significant interest in Python, Python developers are looking at a range of different languages. This alone isn't evidence of the popularity of polyglot programming, but it does indicate some level of fragmentation in the programming language 'marketplace'. Or, to put it another way, we're moving to a place where it becomes much more difficult to say that given languages are definitive in a specific field.
The popularity of Golang

Go has particular appeal for Python programmers, with almost 20% saying they want to learn it next. This isn't that surprising: Go is a flexible language with many applications, from microservices to machine learning, but most importantly it can give you incredible performance. With powerful concurrency, goroutines, and garbage collection, it has features designed to ensure application efficiency. Given it was designed by Google, this isn't that surprising; it's almost purpose-built for software engineering today. Its popularity with JavaScript developers further confirms that it holds significant developer mindshare, particularly among those in positions where projects and use cases demand flexibility.

Read next: Is Golang truly community driven and does it really matter?

A return to C++

An interesting contrast to the popularity of Go is the relative popularity of C++ in our Skill Up results. C++ is ancient in comparison to Golang, but it nevertheless seems to occupy a similar level of developer mindshare. The reasons are probably similar: it's another language that can give you incredible power and performance. For Python developers, part of the attraction is its usefulness for deep learning (TensorFlow is written in C++). But more than that, C++ is also an important foundational language. While it isn't easy to learn, it does help you to understand some of the fundamentals of software. From this perspective, it provides a useful starting point from which to learn other languages; it's a vital piece that can unlock the puzzle of polyglot programming.

A more mature JavaScript

JavaScript also came up in our Skill Up survey results. Indeed, Python developers are keen on the language, which tells us something about the types of tasks Python developers are doing as well as the way JavaScript has matured. On the one hand, Python developers are starting to see the value of web-based technologies, while on the other, JavaScript is expanding in scope to become much more than just a front-end programming language.

Read next: Is web development dying?

Kotlin and TypeScript

The appearance of other, smaller languages in our survey results emphasises the way in which the language ecosystem is fragmenting. TypeScript, for example, may never supplant JavaScript, but it could become an important addition to a developer's skill set if they begin running into problems scaling JavaScript. Kotlin represents something similar for Java developers; indeed, it could eventually outpace its older relative. But again, its popularity will emerge according to specific use cases. It will begin to take hold in particular where Java's limitations become more exposed, such as in modern app development.

Rust: a Goldilocks programming language perfect for polyglot programming

One final mention deserves to go to Rust. In many ways, Rust's popularity is related to the continued relevance of C++, but it offers some improvements: essentially, it's easier to leverage Rust, while using C++ to its full potential requires experience and skill.

Read next: How Deliveroo migrated from Ruby to Rust without breaking production

One commenter on Hacker News described it as a 'Goldilocks' language: "It's not so alien as to make it inaccessible, while being alien enough that you'll learn something from it." This is arguably what a programming language should be like in a world where polyglot programming rules.
It shouldn't be so complex as to consume your time and energy, but it should be sophisticated enough to allow you to solve difficult engineering problems.

Learning new programming languages makes it easier to solve engineering problems

The value of learning multiple programming languages is indisputable. Python is the language that's changing the game, becoming a vital addition to the toolkit of developers from a range of different backgrounds, but there are plenty of other languages that could prove useful. What's ultimately important is to explore the options that are available and to start using a language that's right for you. Indeed, the right choice isn't always immediately obvious - but don't let that put you off. Give yourself some time to explore new languages and find the one that's going to work for you.
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?

Sugandha Lahoti
11 Jun 2019
10 min read
Most of the recent breakthroughs in Artificial Intelligence are driven by data and computation. What is often missing from the discussion is the energy cost. Most large AI networks require huge amounts of training data to ensure accuracy, but these accuracy improvements depend on the availability of exceptionally large computational resources - and the larger the computational resource, the more energy it consumes. This is not only costly financially (due to the cost of hardware, cloud compute, and electricity) but also straining for the environment, due to the carbon footprint required to fuel modern tensor processing hardware. Considering the climate change repercussions we are facing on a daily basis, consensus is building on the need for AI research ethics to include a focus on minimizing and offsetting the carbon footprint of research. Researchers should also report the energy cost of their work in research papers, alongside time, accuracy, and other metrics.

Deep learning's outsized environmental impact was further highlighted in a recent research paper published by MIT researchers. In the paper titled "Energy and Policy Considerations for Deep Learning in NLP", researchers performed a life cycle assessment for training several common large AI models. They quantified the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP, and provided recommendations to reduce costs and improve equity in NLP research and practice.

Per the paper, training AI models can emit more than 626,000 pounds of carbon dioxide equivalent - nearly five times the lifetime emissions of the average American car (and that includes the manufacture of the car itself). It is estimated that we must cut carbon emissions by half over the next decade to deter escalating rates of natural disaster.

This speaks volumes about the carbon offset involved, and raises the question of whether the heavy (carbon) investment of deep learning is really worth the marginal improvement in predictive accuracy over cheaper, alternative methods. This news alarmed people tremendously.

https://twitter.com/sakthigeek/status/1137555650718908416
https://twitter.com/vinodkpg/status/1129605865760149504
https://twitter.com/Kobotic/status/1137681505541484545

Even if some of this energy comes from renewable or carbon credit-offset resources, the high energy demands of these models are still a concern. This is because in many locations the energy used is not derived from carbon-neutral sources, and even when renewable energy is available, it is limited by the equipment available to store it.

The carbon footprint of NLP models

The researchers in this paper looked specifically at NLP models. They examined four models - the Transformer, ELMo, BERT, and GPT-2 - and trained each on a single GPU for up to a day to measure its power draw. Next, they used the number of training hours listed in each model's original paper to calculate the total energy consumed over the complete training process. This number was then converted into pounds of carbon dioxide equivalent based on the average energy mix in the US, which closely matches the energy mix used by Amazon's AWS, the largest cloud services provider.

The researchers found that the environmental costs of training grew proportionally to model size.
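As a rough, back-of-the-envelope sketch of that methodology - note that the wattage, training time, datacenter overhead, and grid emission factor below are illustrative assumptions, not the figures measured in the paper:

```python
# Toy estimate of training emissions: power draw x training time x grid carbon intensity.
# All numbers below are illustrative assumptions, not figures from the MIT paper.

GPU_POWER_KW = 0.25          # assumed average draw per GPU, in kilowatts
NUM_GPUS = 8                 # assumed size of the training rig
TRAINING_HOURS = 24 * 14     # assumed two weeks of continuous training
PUE = 1.5                    # assumed datacenter overhead (cooling, networking)
KG_CO2E_PER_KWH = 0.45       # assumed average US grid emission factor

energy_kwh = GPU_POWER_KW * NUM_GPUS * TRAINING_HOURS * PUE
emissions_kg = energy_kwh * KG_CO2E_PER_KWH

print(f"Energy consumed: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e "
      f"({emissions_kg * 2.20462:,.0f} lbs)")
```

Even with these modest assumptions the emissions run to roughly a thousand pounds of CO2e for a single training run, which is why repeated tuning runs add up so quickly.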
These costs increased exponentially when additional tuning steps were used to improve a model's final accuracy. In particular, neural architecture search had high associated costs for little performance benefit. Neural architecture search is a tuning process which tries to optimize a model by incrementally tweaking a neural network's design through exhaustive trial and error.

The researchers also noted that these figures should only be considered as baselines. In practice, AI researchers mostly develop a new model from scratch or adapt an existing model to a new data set; both require many more rounds of training and tuning.

Based on their findings, the authors recommend certain proposals to heighten awareness of this issue in the NLP community and promote mindful practice and policy.

Researchers should report training time and sensitivity to hyperparameters. There should be a standard, hardware-independent measurement of training time, such as gigaflops required to convergence. There should also be a standard measurement of model sensitivity to data and hyperparameters, such as variance with respect to the hyperparameters searched.

Academic researchers should get equitable access to computation resources. The trend toward training huge models on vast amounts of data is not feasible for academics, because they don't have the computational resources. It would be more cost-effective for academic researchers to pool resources to build shared compute centers at the level of funding agencies, such as the U.S. National Science Foundation.

Researchers should prioritize computationally efficient hardware and algorithms. For instance, developers could help reduce the energy associated with model tuning by providing easy-to-use APIs implementing more efficient alternatives to brute-force search.

The next step is to introduce energy cost as a standard metric that researchers are expected to report alongside their findings. They should also try to minimize the carbon footprint by developing compute-efficient training methods, such as new machine learning algorithms, or new engineering tools that make existing ones more compute-efficient. Above all, we need to formulate strict public policies that steer digital technologies toward speeding a clean energy transition while mitigating the risks.

Another factor contributing to high energy consumption is the hardware used to run the neural networks behind most deep learning tasks. To tackle that issue, researchers and major tech companies - including Google, IBM, and Tesla - have developed "AI accelerators," specialized chips that improve the speed and efficiency of training and testing neural networks. However, these AI accelerators use electricity and have a theoretical minimum limit for energy consumption. Also, most present-day ASICs are based on CMOS technology and suffer from the interconnect problem: even in highly optimized architectures where data are stored in register files close to the logic units, a majority of the energy consumption comes from data movement, not logic. Analog crossbar arrays based on CMOS gates or memristors promise better performance, but as analog electronic devices, they suffer from calibration issues and limited accuracy.

Implementing chips that use light instead of electricity

Another group of MIT researchers has developed a "photonic" chip that uses light instead of electricity, and consumes relatively little power in the process.
The photonic accelerator uses more compact optical components and optical signal-processing techniques to drastically reduce both power consumption and chip area. Practical applications for such chips could also include reducing energy consumption in data centers: "In response to vast increases in data storage and computational capacity in the last decade, the amount of energy used by data centers has doubled every four years, and is expected to triple in the next 10 years."

https://twitter.com/profwernimont/status/1137402420823306240

The chip could be used to process massive neural networks millions of times more efficiently than today's classical computers.

How the photonic chip works

The researchers have given a detailed explanation of the chip's workings in their research paper, "Large-Scale Optical Neural Networks Based on Photoelectric Multiplication". The chip relies on a compact, energy-efficient "optoelectronic" scheme that encodes data with optical signals, but uses "balanced homodyne detection" for matrix multiplication. This is a technique that produces a measurable electrical signal after calculating the product of the amplitudes (wave heights) of two optical signals.

Pulses of light encoded with information about the input and output neurons for each neural network layer - which are needed to train the network - flow through a single channel. Optical signals carrying the neuron and weight data fan out to a grid of homodyne photodetectors. The photodetectors use the amplitude of the signals to compute an output value for each neuron. Each detector feeds an electrical output signal for each neuron into a modulator, which converts the signal back into a light pulse. That optical signal becomes the input for the next layer, and so on.

Limitations of photonic accelerators

Photonic accelerators generally have unavoidable noise in the signal. The more light that's fed into the chip, the less noise and the greater the accuracy; less input light increases efficiency but negatively impacts the neural network's performance. The ideal operating point balances the two. The efficiency of an AI accelerator is measured in how many joules it takes to perform a single operation of multiplying two numbers. Traditional accelerators are measured in picojoules, or one-trillionth of a joule. Photonic accelerators measure in attojoules, which is a million times more efficient. In their simulations, the researchers found their photonic accelerator could operate with sub-attojoule efficiency.

Tech companies are the largest contributors to this carbon footprint

The realization that training an AI model can produce emissions equivalent to five cars should make the carbon footprint of artificial intelligence an important consideration for researchers and companies going forward. UMass Amherst's Emma Strubell, a member of the research team and co-author of the paper, said, "I'm not against energy use in the name of advancing science, obviously, but I think we could do better in terms of considering the trade off between required energy and resulting model improvement."

"I think large tech companies that use AI throughout their products are likely the largest contributors to this type of energy use," Strubell said. "I do think that they are increasingly aware of these issues, and there are also financial incentives for them to curb energy use." In 2016, Google's DeepMind was able to reduce the energy required to cool Google data centers by 30%. This full-fledged AI system has features including continuous monitoring and human override.
Recently, Microsoft doubled its internal carbon fee to $15 per metric ton on all carbon emissions. The funds from this higher fee will maintain Microsoft's carbon neutrality and help meet its sustainability goals. On the other hand, Microsoft is also two years into a seven-year deal - rumored to be worth over a billion dollars - to help Chevron, one of the world's largest oil companies, better extract and distribute oil.

https://twitter.com/AkwyZ/status/1137020554567987200

Amazon had announced that it would power its data centers with 100 percent renewable energy, without committing to a dedicated timeline. Since 2018, Amazon has reportedly slowed down its efforts to use renewable energy, relying on it for only 50 percent. It has also not announced any new deals to supply clean energy to its data centers since 2016, according to a report by Greenpeace, and it quietly abandoned plans for one of its last scheduled wind farms last year. In April, over 4,520 Amazon employees organized against Amazon's continued profiting from climate devastation. However, Amazon rejected all 11 shareholder proposals, including the employee-led climate resolution, at its annual shareholder meeting.

The researchers behind both of these studies illustrate the dire need to change our outlook towards building artificial intelligence models and chips, given their impact on the carbon footprint. This does not mean halting AI research altogether; instead, there should be an awareness of the environmental impact that training AI models might have, which in turn can inspire researchers to develop more efficient hardware and algorithms for the future.

Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change.
Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models.
Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

Microsoft’s Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more

Bhagyashree R
11 Jun 2019
6 min read
On Sunday at E3 2019, Microsoft made some really big announcements that had the audience screaming. These included the release date of Project Scarlett, the Xbox One successor; more than 60 game trailers; Keanu Reeves taking the stage to promote Cyberpunk 2077; and much more.

E3, which stands for Electronic Entertainment Expo, is one of the biggest gaming events of the year. Its official dates are June 11-13; however, these dates only cover the shows happening at the Los Angeles Convention Center. The press conferences were held on June 8 and 9. Along with hosting the world premieres of several computer and video games, the event also showcases new hardware and software products that take the gaming experience to the next level. Here are some of the highlights from Microsoft's press conference:

Project Scarlett will arrive in fall 2020 with Halo Infinite

Rumors have been circulating about the next generation of Xbox since December last year. Putting all these rumors to rest, Microsoft officially announced that Project Scarlett is planned for release during fall next year. The tech giant further shared that the next big space war game, Halo Infinite, will launch alongside Project Scarlett.

According to Microsoft, we can expect the new device to be four times more powerful than the Xbox One X. It includes a custom-designed CPU based on AMD's Zen 2 and Radeon RDNA architecture. It supports 8K gaming, framerates of 120fps, and ray tracing. The device will also include a solid-state drive (SSD), enabling faster game loads than the mechanical hard drives of older consoles.

https://youtu.be/-ktN4bycj9s

xCloud will open for public trials in October, one month ahead of Google's Stadia

After giving a brief live demonstration of its upcoming xCloud game streaming service in March, Microsoft announced that it will be available to the public in October this year. This announcement seems to be a direct response to Google's Stadia, which was revealed in March and will make its public debut in November. Along with sharing the release date, the tech giant also gave E3 attendees the first hands-on trial of the service. At the event, Xbox chief Phil Spencer said, "Two months ago we connected all Xbox developers to Project xCloud. Today, we invite those of you here at E3 for our first public hands-on of Project xCloud. To experience the freedom to play right here at the show."

Microsoft built xCloud to provide gamers with a new way to play Xbox games, where the gamers decide how and when they want to play. With xCloud Console Streaming you will be able to "turn your Xbox One into your own personal and free xCloud server." It will enable you to stream your entire Xbox One library, including games from Xbox Game Pass, to any device of your choice.

https://twitter.com/Xbox/status/1137833126959280128

Xbox Elite 2 Wireless Controller to reach you on November 4th for $179.99

Microsoft announced the launch of the Xbox Elite Wireless Controller Series 2, which it says is a totally re-engineered version of the previous Elite controller. It is open for pre-orders now and will be available on November 4th in 24 countries, priced at $179.99. The controller's new adjustable-tension thumbsticks provide improved precision, and shorter hair-trigger locks enable you to fire faster. The device includes USB-C support, Bluetooth, and a rechargeable battery that lasts for up to 40 hours per charge. Along with all these updates, it also allows you to make limitless customizations with the Xbox Accessories app on Xbox One and Windows 10 PC.
https://youtu.be/SYVw0KqQiOI

Cyberpunk 2077 featuring Keanu Reeves to release on April 16th, 2020

Last year, CD Projekt Red, the creator of Cyberpunk 2077, said that E3 2019 would be its "most important E3" ever, and we cannot agree more. Keanu Reeves, aka John Wick himself, came to announce the release date of Cyberpunk 2077, which is April 16th, 2020. The trailer of the game ended with the biggest surprise for the audience: the appearance of Reeves as a character apparently named "Mr. Fusion."

The crowd went wild as soon as Reeves took to the stage to promote Cyberpunk 2077. When the actor said that walking in the streets of Cyberpunk 2077 would be breathtaking, a guy from the crowd yelled, "you're breathtaking." To which Reeves kindly replied:

https://twitter.com/Xbox/status/1137854943006605312

The guy from the crowd was YouTuber Peter Sark, who shared on Twitter that "Keanu Reeves just announced to the world that I'm breathtaking."

https://twitter.com/petertheleader/status/1137846108305014784

CD Projekt Red is now giving him a free collector's edition copy of the game, which is amazing! For everyone else, don't be upset, as you can also pre-order Cyberpunk 2077's physical and collector's editions from the official website. Unlike xCloud, attendees will not be able to get a hands-on trial, but they will still be able to see the demo presentation. The demo is happening at the South Hall in the LA Convention Center, booth 1023, on June 11-13th.

The new Microsoft Flight Simulator is powered by Azure cloud AI

Microsoft showcased a new installment of its long-running Microsoft Flight Simulator series. Powered by Azure cloud artificial intelligence and satellite data, the updated simulator is capable of rendering amazingly realistic visuals. Though not many details have been shared, its trailer shows stunning real-time 4K footage of lifelike landscapes and aircraft. Have a look at it yourself!

https://youtu.be/ReDDgFfWlS4

Though the simulator has been PC-only in the past, this newly updated version is coming to Xbox One and will also be available via Xbox Game Pass. The specific release dates are unknown, but it is expected to be out next year.

Double Fine joins Xbox Game Studios

At the event, Tim Schafer, the founder of Double Fine, shared that his company has now joined Microsoft's ever-growing gaming studio. Double Fine Productions is the studio behind games like Psychonauts, Brutal Legend, and Broken Age. He jokingly said, "For the last 19 years, we've been independent. Then Microsoft came to us and said, 'What if we gave you a bunch of money.' And I said 'OK, yeah.'"

Schafer posted another video on YouTube explaining what this means for the company's existing commitments. He shared that Psychonauts 2 will be provided to crowdfunders on the platforms they chose, but going forward the company will focus on "Xbox, Game Pass, and PC."

https://youtu.be/uR9yKz2C3dY

These were just a few key announcements from the event. To know more, you can watch the Microsoft keynote on YouTube: https://www.youtube.com/watch?v=zeYQ-kPF0iQ

12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies

Salesforce is buying Tableau in a $15.7 billion all-stock deal

Richard Gall
10 Jun 2019
4 min read
Salesforce, one of the world's leading CRM platforms, is buying the data visualization software company Tableau in an all-stock deal worth $15.7 billion. The news comes just days after it emerged that Google is buying one of Tableau's competitors in the data visualization market, Looker.

Taken together, the stories highlight the importance of analytics to some of the planet's biggest companies. They suggest that despite years of the big data revolution, it's only now that market-leading platforms are starting to realise that their customers want the level of capabilities offered by the best in the data visualization space.

Salesforce is paying for Tableau with its own stock. As the press release published on the Salesforce site explains, "each share of Tableau Class A and Class B common stock will be exchanged for 1.103 shares of Salesforce common stock, representing an enterprise value of $15.7 billion (net of cash), based on the trailing 3-day volume weighted average price of Salesforce's shares as of June 7, 2019." The acquisition is expected to be completed by the end of October 2019.

https://twitter.com/tableau/status/1138040596604575750

Why is Salesforce buying Tableau?

The deal is an incredible result for Tableau shareholders. At the end of last week, its market cap was $10.7 billion. This has led to some scepticism about just how good a deal it is for Salesforce. One commenter on Hacker News said "this seems really high for a company without earnings and a weird growth curve. Their ticker is cool and maybe sales force [sic] wants to be DATA on nasdaq. Otherwise, it will be hard to justify this high markup for a tool company."

With Salesforce shares dropping 4.5% as markets opened this week, it seems investors are inclined to agree - Salesforce is certainly paying a premium for Tableau. However, whatever the long-term impact of the acquisition, the price paid underlines the fact that Salesforce views Tableau as exceptionally important to its long-term strategy.

It opens up an opportunity for Salesforce to reposition and redefine itself as much more than just a CRM platform. It means it can start to compete with the likes of Microsoft, which has a full suite of professional and business intelligence tools. Moreover, it also provides the platform with another way of potentially onboarding customers - given Tableau is well-known as a powerful yet accessible data visualization tool, it creates an avenue through which new users can find their way to the Salesforce product.

Marc Benioff, Chair and co-CEO of Salesforce, said "we are bringing together the world's #1 CRM with the #1 analytics platform. Tableau helps people see and understand data, and Salesforce helps people engage and understand customers. It's truly the best of both worlds for our customers--bringing together two critical platforms that every customer needs to understand their world."

Tableau has been a target for Salesforce for some time. Leaked documents from 2016 revealed that the data visualization company was one of 14 companies that Salesforce had an interest in (another was LinkedIn, which would eventually be purchased by Microsoft).

Read next: Alteryx vs. Tableau: Choosing the right data analytics tool for your business

What's in it for Tableau (aside from the money...)?

For Tableau, there are many benefits of being purchased by Salesforce beyond the money itself. Primarily this is about expanding the platform's reach - Salesforce users are people who are interested in data, with a huge range of use cases.
By joining up with Salesforce, Tableau will become their go-to data visualization tool. "As our two companies began joint discussions," Tableau CEO Adam Selipsky said, "the possibilities of what we might do together became more and more intriguing. They have leading capabilities across many CRM areas including sales, marketing, service, application integration, AI for analytics and more. They have a vast number of field personnel selling to and servicing customers. They have incredible reach into the fabric of so many customers, all of whom need rich analytics capabilities and visual interfaces... On behalf of our customers, we began to dream about what we might accomplish if we could combine our ability to help people see and understand data with their ability to help people engage and understand customers."

What will happen to Tableau?

Tableau won't be going anywhere. It will continue to exist under its own brand, with the current leadership all remaining in place, including Selipsky.

What does this all mean for the technology market?

At the moment, it's too early to say - but the last year or so has seen some major high-profile acquisitions by tech companies. Perhaps we're seeing the emergence of a tooling arms race, as the biggest organizations attempt to arm themselves with ecosystems of established, market-leading tools. Whether this is good or bad for users remains to be seen.

Did unfettered growth kill Maker Media? Financial crisis leads company to shutdown Maker Faire and lay off all staff

Savia Lobo
10 Jun 2019
5 min read
Updated: On July 10, 2019, Dougherty announced the relaunch of Maker Faire and Maker Media under the new name "Make Community".

Maker Media Inc., the company behind Maker Faire, the popular event that hosts arts, science, and engineering DIY projects for children and their parents, has laid off all 22 of its employees and decided to shut down due to financial troubles.

In January 2005, the company started out with MAKE, an American bimonthly magazine focused on do-it-yourself (DIY) and do-it-with-others (DIWO) projects involving computers, electronics, robotics, metalworking, woodworking, and more, for both adults and children. In 2006, the company held its first Maker Faire event, which lets attendees wander amidst giant, inspiring art and engineering installations. Maker Faire now includes 200 owned and licensed events per year in over 40 countries.

The Maker movement gained momentum and popularity when MAKE magazine first started publishing 15 years ago. The movement emerged as a source of livelihood for many, as individuals found ways to build small businesses around their creative activity. In 2014, the White House blog posted an article stating, "Maker Faires and similar events can inspire more people to become entrepreneurs and to pursue careers in design, advanced manufacturing, and the related fields of science, technology, engineering and mathematics (STEM)." With funding from the Department of Labor, "the AFL-CIO and Carnegie Mellon University are partnering with TechShop Pittsburgh to create an apprenticeship program for 21st-century manufacturing and encourage startups to manufacture domestically." Recently, researchers from Baylor University and the University of North Carolina highlighted, in their research paper, opportunities for studying the conditions under which the Maker movement might foster entrepreneurship outcomes.

Dale Dougherty, Maker Media Inc.'s founder and CEO, told TechCrunch, "I started this 15 years ago and it's always been a struggle as a business to make this work. Print publishing is not a great business for anybody, but it works…barely. Events are hard . . . there was a drop off in corporate sponsorship". "Microsoft and Autodesk failed to sponsor this year's flagship Bay Area Maker Faire", TechCrunch reports.

Dougherty also said that the company is trying to keep its servers running. "I hope to be able to get control of the assets of the company and restart it. We're not necessarily going to do everything we did in the past but I'm committed to keeping the print magazine going and the Maker Faire licensing program", he added.

In 2016, the company laid off 17 of its employees, followed by 8 more employees this past March. "They've been paid their owed wages and PTO, but did not receive any severance or two-week notice", TechCrunch reports. These layoffs may have hinted to staff that a financial crisis was affecting the company. Maker Media Inc. had raised $10 million from Obvious Ventures, Raine Ventures, and Floodgate. Dougherty says, "It started as a venture-backed company but we realized it wasn't a venture-backed opportunity. The company wasn't that interesting to its investors anymore. It was failing as a business but not as a mission. Should it be a non-profit or something like that? Some of our best successes, for instance, are in education."

The company has a huge public following for its products. Dougherty told TechCrunch that despite the rain, Maker Faire's big Bay Area event last week met its ticket sales target.
Also, about 1.45 million people attended its events in 2016. “MAKE: magazine had 125,000 paid subscribers and the company had racked up over one million YouTube subscribers. But high production costs in expensive cities and a proliferation of free DIY project content online had strained Maker Media”, writes TechCrunch. Dougherty told TechCrunch he has been overwhelmed by the support shown by the Maker community. As of now, licensed Maker Faire events around the world will proceed as planned. “Dougherty also says he’s aware of Oculus co-founder Palmer Luckey’s interest in funding the company, and a GoFundMe page started for it”, TechCrunch reports. Mike Senese, Executive Editor, MAKE magazine, tweeted, “Nothing but love and admiration for the team that I got to spend the last six years with, and the incredible community that made this amazing part of my life a reality.” https://twitter.com/donttrythis/status/1137374732733493248 https://twitter.com/xeni/status/1137395288262373376 https://twitter.com/chr1sa/status/1137518221232238592 Former Mythbusters co-host Adam Savage, who was a regular presence at the Maker Faire, told The Verge, “Make Media has created so many important new connections between people across the world. It showed the power from the act of creation. We are the better for its existence and I am sad. I also believe that something new will grow from what they built. The ground they laid is too fertile to lie fallow for long.” On July 10, 2019, Dougherty announced he’ll relaunch Maker Faire and Maker Media with the new name “Make Community“. The official launch of Make Community will supposedly be next week. The company is also working on a new issue of Make Magazine that is planned to be published quarterly and the online archives of its do-it-yourself project guides will remain available. Dougherty told TechCrunch “with the goal that we can get back up to speed as a business, and start generating revenue and a magazine again. This is where the community support needs to come in because I can’t fund it for very long.” GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution 12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft] Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service

Richard Gall
10 Jun 2019
7 min read
This year's Skill Up survey threw a spotlight on the challenges developers and engineering teams face when it comes to cloud. Indeed, it even highlighted the extent to which cloud is still a nascent trend for many developers, even though it feels so mainstream within the industry - almost half of respondents aren't using cloud at all.

But for those that do use cloud, the survey results also illustrated some of the specific ways that people are using or plan to use cloud platforms, as well as highlighting the biggest challenges and mistakes organisations are making when it comes to cloud. What came out as particularly important is that the limitations and the opportunities of cloud must be thought of together. With our research finding that cost only becomes important once a cloud platform is being used, it's clear that if we're to successfully - and cost-effectively - use the cloud platforms we do, understanding the relationship between cost and opportunity over a sustained period of time (rather than, say, a month) is absolutely essential. As one of our respondents told us, "businesses are still figuring out how to leverage cloud computing for their business needs and haven't quite got the cost model figured out."

Why does cost pose such a problem when it comes to cloud computing?

In this year's survey, we asked people what their primary motivations for using cloud are. The key motivators were use case and employment (i.e. the decision was out of the respondent's hands), but it was striking to see cost as only a minor consideration. Placed in the broader context of discussions around efficiency and a tightening global market, this seemed remarkable. It appears that people aren't entering the cloud marketplace with cost as a top consideration.

In contrast, however, this picture changes when we asked respondents about the biggest limiting factors for their chosen cloud platforms. At this point, cost becomes a much more important factor. This highlights that the reality of cloud costs only becomes apparent - or rather, becomes more apparent - once a cloud platform is implemented and being used. From this we can infer that there is a lack of strategic planning in cloud purchasing. It's almost as if technology leaders are falling into certain cloud platforms based on commonplace assumptions about what's right. This then has consequences further down the line.

We need to think about cloud cost and functionality together

The fact that functionality is also a key limitation is important to note here - in fact, it is closely tied up with cost, insofar as the functionality of each respective cloud platform is very neatly defined by its pricing structure. Take serverless, for example - although it's typically regarded as something that can be cost-effective for organizations, it can prove costly when you start to scale workloads. You might save more money simply by optimizing your infrastructure. What this means in practice is that the features you want to exploit within your cloud platform should be approached with a clear sense of how they are going to be used and how they are going to fit into the evolution of your business and technology in the medium and long term.

Getting the most from leading cloud trends

There were two distinct trends that developers identified as the most exciting: machine learning and serverless. Although both are very different, they both hold the promise of efficiency.
Whether that's the efficiency of moving away from traditional hosting to cloud-based functions, or of powerful data processing and machine-led decision making at scale, the fundamentals of both trends are about managing economies of scale in ways that would have been impossible half a decade ago. This plays into some of the issues around cost. Serverless and machine learning both appear to offer ways of saving on spending or radically driving growth; when that doesn't quite turn out in the way technology purchasers expected, the relationship between cost and features can become a little strained.

Serverless

The idea that serverless will save you money is popular. And in general, it is inexpensive. The pricing structures of both AWS and Azure make Functions as a Service (FaaS) particularly attractive. It means you'll no longer be spending money on provisioning compute resources you don't actually need, with your provider managing the necessary elasticity.

Read next: The Future of Cloud lies in revisiting the designs and limitations of today's notion of 'serverless computing', say UC Berkeley researchers

However, as we've already seen, serverless doesn't guarantee cost efficiency. You need to properly understand how you're going to use serverless to ensure that it's not costing you big money without you realising it. One way of using it might be to employ it for very specific workloads, allowing you to experiment in a relatively risk-free manner before employing it elsewhere - whatever you decide, you must ensure that the scope and purpose of the project is clear.

Machine learning as a Service

Machine learning - or deep learning in particular - is very expensive to do. This is one of the reasons that machine learning on cloud - machine learning as a service - is one of the most attractive features of many cloud platforms. But it's not just about cost. Using cloud-based machine learning tools also removes some of the barriers to entry, making it easier for engineers who don't necessarily have extensive training in the field to actually start using machine learning models in various ways.

However, this does come with some limitations - and just as with serverless, you really do need to understand and even visualize how you're going to use machine learning to ensure that you're not just wasting time and energy with machine learning cloud features. You need to be clear about exactly how you're going to use machine learning, what data you're going to use, where it's going to be stored, and what the end result should look like. Perhaps you want to embed machine learning capabilities inside an app? Or perhaps you want to run algorithms on existing data to inform internal decisions? Whatever it is, all these questions are important.

These types of questions will also impact the type of platform you select. Google Cloud Platform is far and away the go-to platform for machine learning (this is one of the reasons why so many respondents said their motivation for using it was use case), but bear in mind that this could lead to some issues if the bulk of your data is typically stored on, say, AWS - you'll need to build some kind of integration, or move your data to GCP (which is always going to be a headache).

The hidden costs of innovation

These types of extras are really important to consider when it comes to leveraging exciting cloud features.
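Before getting to those extras, it helps to sketch the basic arithmetic a pricing calculator performs for the serverless case discussed above. The prices and workload figures below are illustrative assumptions, not real AWS or Azure rates, and the break-even point will look different for your own workload - the shape of the comparison is the point:

```python
# Toy cost comparison: FaaS (pay per request + compute time) vs. an always-on instance.
# All prices and workload numbers are illustrative assumptions, not provider quotes.

def faas_monthly_cost(requests, avg_duration_s, memory_gb,
                      price_per_million=0.20, price_per_gb_second=0.0000167):
    request_cost = (requests / 1_000_000) * price_per_million
    compute_cost = requests * avg_duration_s * memory_gb * price_per_gb_second
    return request_cost + compute_cost

def instance_monthly_cost(hourly_rate=0.05, hours=730):
    return hourly_rate * hours  # a small always-on VM, at an assumed rate

for monthly_requests in (100_000, 10_000_000, 500_000_000):
    faas = faas_monthly_cost(monthly_requests, avg_duration_s=0.3, memory_gb=0.5)
    vm = instance_monthly_cost()
    print(f"{monthly_requests:>12,} requests/month: "
          f"FaaS ~${faas:,.2f} vs. always-on instance ~${vm:,.2f}")
```

At low volumes the function-based model is effectively free; at sustained high volumes the always-on instance can be an order of magnitude cheaper. That is exactly the kind of trade-off that gets missed when a platform is chosen on assumptions rather than workload data.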
Yes, you need to use a pricing calculator and spend time comparing platforms, but factoring in the additional development time needed to build integrations or move things is something that a calculator clearly can't account for. Indeed, this is true in the context of both machine learning and serverless. The organizational implications of your purchases are perhaps the most important consideration, and one that's often the easiest to miss.

Control the scope and empower your team

That said, the organizational implications aren't necessarily problems to be resolved - they could well be opportunities that you need to embrace. Either way, you need to prepare and be ready for those changes. Ultimately, preparation is key when it comes to leveraging the benefits of cloud. Defining the scope is critical, and to do that you need to understand what your needs are and where you want to get to. That sounds obvious, but it's all too easy to fall into the trap of focusing on the possibilities and opportunities of cloud without paying careful consideration to how to ensure it works for you.

Read the results of Skill Up 2019. Download the report here.

What Elon Musk can teach us about Futurism & Technology Forecasting

Craig Wing
07 Jun 2019
14 min read
Today, you can’t build a resilient business without robust technology forecasting. If you want to future-proof your business and ensure that it’s capable of adapting to change, looking ahead to the future in a way that’s both methodical and thoughtful is vital. There are no shortage of tales that attest to this fact. Kodak and Blackberry are two of the best known examples, but one that lingers in my mind is Nokia. This is a guest post by Craig Wing, futurist and speaker working at the nexus of leadership, strategy, exponential organizations and corporate culture. Follow Craig on Twitter @wingnuts123 or connect with him on LinkedIn here. Nokia’s failure to forecast the future When it was acquired by Microsoft back in 2013, Nokia was worth 2.9% of its market cap high of $250 billion. Back then, in the year 2000, it held a 30.6 market share in the mobile market - 17.3% more than Motorola. In less than two decades it had gone from an organization widely regarded as a pinnacle of both engineering and commercial potency, to one that was complacent, blithely ignoring the reality of an unpredictable future that would ultimately lead to its demise. “We didn’t do anything wrong” Nokia CEO Stephen Elop said in a press conference just before the company was acquired by Microsoft, “but somehow we still lost.” Although it’s hard not to sympathize with Elop, his words nevertheless bring to mind something Bill Gates said: “Success is a lousy teacher, it seduces smart people into thinking they can’t lose.” But what should you do to avoid complacency? Focus on the process of thinking, not its content Unfortunately, it’s not as straightforward as simply looking forward to the trends and changes that appear to be emerging on the horizon. That’s undoubtedly important, and it’s something you certainly should be doing, but again this can cause a new set of problems. You could be the most future-focused business leader on the planet, but if all you’re focused on is what’s going to be happening rather than why it is - and, more importantly, why it’s relevant to you - you’re going to eventually run into the same sort of problems as Nokia. This is a common problem I’ve noticed with many clients in many different industries across the globe. There is a recurring tendency to be passive in the face of the future. Instead of seeing it as something they can create and shape in a way that’s relevant to them, they see it as a set of various trends and opportunities that may or may not impact their organisations. They’re always much more interested in what they should be thinking about rather than how they should be thinking. This is particularly true for those who have a more deterministic view, where they believe everything is already planned out - that type of thinking can be dangerous as well as a little pessimistic. It’s almost as if you’re admitting you have no ability to influence the future. For the rest of this post I’m going to show you new forecasting techniques for thinking about the future. While I’m primarily talking about technology forecasting, these forecasting techniques can be applied to many different domains. You might find them useful for thinking about the future of your business more generally. How to rethink technology forecasting and planning for the future Look backwards from the future The cone of possibility The cone of possibility is a common but flawed approach to forecasting. Essentially it extrapolates the future from historical fact. 
It’s a way of thinking that says this is what’s happening now, which means we can assume this is going to happen in the future. While this may seem like a common sense approach, it can cause problems. At the most basic level, it can be easy to make mistakes - when you use the present as a cue to think about the future, there’s a big chance that your perspective will in someway be limited. Your understanding of something might well appear sound, but perhaps there’s an important bit of context that’s missing from your analysis. But there are other issues with this approach, too: The cone of possibility approach misses the ‘why’ behind events and developments. It puts you in a place where you’re following others, almost as if you’re trying to keep up with your neighbors, which, in turn, means you only understand the surface elements of a particular trend rather than the more sophisticated drivers behind it. Nokia had amassed a market lead with its smartphones based on the Symbian operating system, only to lose out to Apple’s touchscreen iPhone. This is a great example of a company failing to understand the “why” behind a trend - that customers wanted a new way to interact with their devices that went beyond the traditional keyboard. It’s also an approach that means you’ll always be playing catch up. You can bet that the largest organizations are months, if not years, ahead of you in the R&D stakes, which means actually building for the future becomes a game that’s set by market leaders. It’s no longer one that you’re in charge of. The thrust of impossibility However, there is an alternative - something that I call the thrust of impossibility. To properly understand the concept of the thrust of impossibility, it’s essential to appreciate the fact that the future isn’t determined. Yes there are known knowns from which we can extrapolate future events, but there are also known unknowns and unknown unknowns that are beyond our control. This isn’t something that should scare you, but it can instead be something you can use to your advantage. If we follow the cone of possibility, the market would almost continue in its current state, right? It works by looking backwards from a fixed point in the future. From this perspective, it is a more imaginative approach that requires us to expand the limits of what we believe is possible and then understand the route by which that end point can be reached. This process of ‘future mapping’ frees us from the “cone of possibility” and the boundary conditions and allow us to conceptualize a plethora of opportunities. I like to think of this as creating memories from the future. In more practical terms, it allows us to recalibrate our current position according to where we want to be. The benefit of this is that this form of technology forecasting gives direction to our current business strategy. It also allows us to amend our current trajectory if it appears to be doomed for failure by showing how far off we actually are. A good example of this approach to the future can be seen in Elon Musk’s numerous businesses. Viewed through the cone of possibility, his portfolio of companies don’t really make sense: Tesla, Solar City, SpaceX, The Boring Company – none fit within the framework of the cone. However, when viewed backwards from the “thrust of impossibility” – we can easily see how these seemingly disparate pieces link together as part of a grander vision. 
A lesson from conservation: pay attention to risk

Another way of thinking about the future and technology forecasting can be illustrated by a problem currently facing my native South Africa - rhinoceros poaching. Nearly 80% of the world's rhinos live in South Africa; the country has been hit hard by poachers, with more than 1,000 rhinos killed each year between 2013 and 2017 (approximately 3 per day).

[Image: via savetherhino.org]

Due to the severity of the situation, there are a number of possible interventions that authorities are using to curb the slaughter. Many involve tracking the rhinos themselves and then deploying trackers and game rangers to protect them. However, the problem with this approach is that if the systems that monitor the geo-location of the rhinos are infiltrated, the hackers will then know the exact locale of the endangered species. Poachers can then use this defensive methodology to their own advantage.

The alternative...

As an alternative, progressive game farms realised they could monitor "early sensors" in the savanna by tracking other animals that would flee in the presence of poachers. These animals, like zebras, giraffes, and springbok, are of little value to poachers, but would scatter in their presence. By monitoring the movements of these "early detection" herds, conservationists were better able to track not only the presence of poachers in the vicinity of rhinos but also their general movement. These early, seemingly very different, sensor animals are ones that poachers see no value in; but the conservationists (and rhinos) see immense value in their predictive power.

Likewise, as leaders we need to ensure we have the sensors that are able to orient us to the dangers of our current reality. When we monitor only our "rhinos," we may, like the conservationists, actually be doing more harm by releasing early indicators into the competitive marketplace, or by becoming myopic in our approach to hedging our businesses. The sensors we select must be outside of our field of expertise (like the different game animals) lest we, like the conservationists, seek a solution from only one particular vantage point. Think about the banking sector: if it selected sensors that only view the financial sector, it would likely have missed the rise of mobile payments and cryptocurrencies. Not only must these sensors be outside of our domain, but they must also be able to explore and partner with other companies along the journey. By the nature of their selection, they should not be experts in that domain, but they should be able to provoke and question the basis of decisions from first-principles thinking. By doing this you are effectively enlarging the cone of possibility, creating insights into known unknowns and unknown unknowns.

This is very different to the way consultants are used today. Technology consultants are expected to know the what of the future and draft appropriate strategies, without necessarily focusing on the broader context surrounding a client's needs (well, they should do that, but many do not…). In turn, this approach implies consultants must draft something different from the current approach, and likely follow an approach constrained by the cone of possibility originating from the client's initial conditions. Technology forecasting becomes something passive, starting from a fixed point.
Don't just think about segments - think about them dynamically Many of the business tools taught in business schools today, such as SWOT, PESTLE, Porter’s five forces, are sufficient at mapping current market conditions (magnitude) but are unable to account for the forward direction of travel and changing markets. They offer snapshots, and provide a foundation for vector thinking but they lack the dynamism required to help us manage change over a sustained period of time. In the context of today's fast moving world, this makes technology forecasting and strategic planning very difficult. This means we need to consider the way plans - and the situations they’re meant to help us navigate - can shift and change, to give us the ability to pivot based on market conditions. How do we actually do this? Well, we need to think carefully about the ‘snapshots’ that form the basis of our analysis. For example, the time they are taken, how frequently they are taken will impact how helpful they are for formulating a more coherent long term strategy. Strategies and plans that are only refreshed annually will yield an imperfect view of the total cone of possibility. Moreover, while quarterly plans will yield greater resolution images, these are still not sufficient in market places that are accelerating faster. Indeed, it might sound like a nightmare to have business leaders tweaking plans constantly - and it is! The practical steps are instead to decentralise control away from central planning offices and allow those who are actually executing on the strategy the freedom to move with haste to meet customer demands and address shifting market conditions. Trust those closest to problems, and trust those closest to customers to set and revise plans accordingly - but make sure there are clear communication channels so leadership understands what is happening. In the context of technology and software engineering, this maps on nicely to ideas around Agile and Lean - by building teams that are more autonomous and closely connected to the products and services they are developing, change can happen much more quickly, ensuring you can adapt to change in the market. Quantum business: remember that you’re dead and alive at the same time Quantum theory has been attracting a lot of attention over the last few years. Perhaps due in part to The Big Bang Theory, and maybe even the more recent emergence of quantum computing, the idea that a cat can be both dead and alive at the same time depending on the fact of our observing it (as Schrodinger showed in his famous thought experiment), is one that is simultaneously perplexing, intriguing, and even a little bit amusing. The concept actually has a lot of value for businesses thinking about the future. Indeed, it's an idea that complements technology forecasting. This is because in an increasingly connected world, the various dependencies that exist across value chains, customer perceptions, and social media ecosystems means that, like Schrodinger’s cat, we cannot observe part of a system without interfering with it in some way. If we accept that premise, then we must also accept that ultimately the way we view (and then act on) the market will, subsequently, affect the entire market as well. While very few businesses have the resources of Elon Musk, what’s remarkable is that he has managed to shift the entire auto-manufacturing sector from the internal combustion engine to electric. 
He’s done this by doing much more than simply releasing various Tesla vehicles (Toyota and others had a greater lead time); he’s managed to redefine the entire sector through autonomous manufacturing, Gigafactory battery centres, and “crowdsourced” marketing, among other innovations. Try as they might, the established players will never be able to turn back the clock. This is the new normal. As mentioned earlier, Nokia missed the entire touch screen revolution initiated by Apple in 2008. In the same year, Google launched the Android operating system. Nokia profits plummeted by 30%, while sales decreased 3.1%. Meanwhile iPhone sales grew by 330%. The following year (2009), as a result of the changing marketplace and unable to keep pace with these two new entrants, Nokia reduced its workforce by 1,700 employees. It finally realized it was too slow to react to changing shifting dynamics - the cat’s state of being was now beyond its own control – and Nokia was surpassed by Apple, Blackberry and new non-traditional players like Samsung, HTC and LG. Nokia is not the only giant to be dethroned, the average time spent by a company in the S&P500 has dropped from 33 years in 1965 to 20 years in 1990 and only 14 years by 2026. Half will be gone in 10 years. Further, only 12% of the Fortune 500 remain after 61 years. The remaining 88% have either gone bankrupt, merged or acquired or simply fallen off the list. From 91 companies (revenue over $1 billion) across more than 20 industries, executives were asked: "What is your organization's biggest obstacle to transform in response to market change and disruption?" Forty percent cited "day-to-day decisions" that essentially pay the bill but "undermine our stated strategy to change." Herein lies the biggest challenge for leaders in a quantum business world: your business is simultaneously dead and alive at any given time. Every day, you as a leader make decisions to decide if it lives or dies. If you decide not to, your competitors are making the same decisions and every individual decision cumulatively adds to the entire system being shifted. Put simply, in a quantum world where everything is connected, and where ambivalence appears to rule, decision making is crucial - it forms the foundations from which more forward thinking technology forecasting can take shape. If you don’t put the care and attention into the strategic decisions you make - and the analysis on which all smart ones depend - you fall into a trap where you’re at the mercy of unpredictability. And no business should be the victim of chance.

Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts

Vincy Davis
07 Jun 2019
6 min read
Last week, a team of researchers from Stanford University, the Max Planck Institute for Informatics, Princeton University and Adobe Research published a paper titled "Text-based Editing of Talking-head Video". The paper proposes a method to edit a talking-head video based on its transcript, producing a realistic output video in which the speaker's dialogue has been modified. Essentially, the editor modifies a video using a text transcript to add new words, delete unwanted ones, or completely rearrange the pieces by dragging and dropping. The resulting video maintains a seamless audio-visual flow, without any jump cuts, and will look almost flawless to the untrained eye.

The researchers want this kind of text-based editing approach to lay the foundation for better editing tools in the post-production of movies and television. Actors often botch small bits of a performance or leave out a critical word; this algorithm can help video editors fix that, something that until now has involved expensive reshoots. It can also help in easily adapting audio-visual content to specific target audiences. The tool supports three types of edit operations: adding new words, rearranging existing words, and deleting existing words.

Ohad Fried, a researcher on the paper, says that "This technology is really about better storytelling. Instructional videos might be fine-tuned to different languages or cultural backgrounds, for instance, or children's stories could be adapted to different ages."

https://youtu.be/0ybLCfVeFL4

How does the application work?

The method takes an input talking-head video and a transcript and performs text-based editing. The first step is to align phonemes to the input audio and track each input frame to construct a parametric head model. Next, a 3D parametric face model is registered with each frame of the input talking-head video. This helps in selectively blending different aspects of the face. Then, a background sequence is selected and used for pose data and background pixels. The background sequence allows editors to edit challenging videos with hair movement and slight camera motion.

As facial expressions are an important parameter, the researchers have tried to preserve the retrieved expression parameters as much as possible by smoothing the transitions between them. The output is an edited parameter sequence that describes the new desired facial motion, together with a corresponding retimed background video clip. This is forwarded to a 'neural face rendering' step, which changes the facial motion of the retimed background video to match the parameter sequence. The rendering procedure thus produces photo-realistic video frames of the subject appearing to speak the new phrase. These localized edits blend seamlessly into the original video, producing the edited result. Lastly, to add the audio, the resulting video is retimed to match the recording at the phoneme level. The researchers used the performer's own voice in all their synthesis results. (A simplified, illustrative sketch of this pipeline appears below, after the study results.)

Image Source: Text-based Editing of Talking-head Video

The researchers have tested the system with a series of complex edits including adding, removing and changing words, as well as translations to different languages. When the application was tried in a crowd-sourced study with 138 participants, the edits were rated as "real" almost 60% of the time.
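To make the pipeline above easier to follow, here is a minimal sketch of its stages. This is purely illustrative: the function names, data shapes and placeholder logic are hypothetical stand-ins for the stages described in the paper, not the authors' actual code or API.

```python
# Illustrative sketch only: function names, data shapes and placeholder logic
# are hypothetical stand-ins for the pipeline stages described in the paper,
# not the authors' actual implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    face_params: List[float]  # per-frame 3D face model parameters (pose, expression)
    background: bytes         # background pixels used for retiming


def align_phonemes(transcript: str, audio: bytes) -> List[str]:
    """Stage 1: align the transcript's phonemes to the input audio."""
    return transcript.lower().split()  # placeholder alignment


def register_face_model(frames: List[Frame]) -> List[List[float]]:
    """Stage 2: fit a parametric 3D face model to every input frame."""
    return [f.face_params for f in frames]


def smooth_expressions(params: List[List[float]]) -> List[List[float]]:
    """Stage 3: smooth expression parameters across the edit boundary."""
    return params  # placeholder: a real system would interpolate here


def neural_face_render(params: List[List[float]],
                       backgrounds: List[bytes]) -> List[bytes]:
    """Stage 4: re-render the face region to match the new parameter sequence."""
    return backgrounds  # placeholder for the learned renderer


def edit_talking_head(frames: List[Frame], transcript: str, audio: bytes) -> List[bytes]:
    """End-to-end flow: alignment, model fitting, smoothing, neural rendering."""
    align_phonemes(transcript, audio)
    params = smooth_expressions(register_face_model(frames))
    return neural_face_render(params, [f.background for f in frames])


if __name__ == "__main__":
    frames = [Frame(face_params=[0.0, 0.1], background=b"px") for _ in range(3)]
    edited = edit_talking_head(frames, "hello there", audio=b"")
    print(len(edited), "frames rendered")
```

In the actual system, each of these placeholder stages is a substantial component in its own right; in particular, the neural face renderer is a learned model rather than a simple function.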
Fried said that "The visual quality is such that it is very close to the original, but there's plenty of room for improvement."

Ethical considerations: erosion of truth, confusion and defamation

Even though the application is quite useful for video editors and producers, it raises important and valid concerns about its potential for misuse. The researchers themselves acknowledge that such a technology might be used for illicit purposes: "We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals. We are concerned about such deception and misuse."

They recommend certain precautions to avoid deception and misuse, such as watermarking: "The fact that the video is synthesized may be obvious by context, directly stated in the video or signaled via watermarking. We also believe that it is essential to obtain permission from the performers for any alteration before sharing a resulting video with a broad audience." They urge the community to continue to develop forensics, fingerprinting and verification techniques to identify manipulated video. They also support the creation of appropriate regulations and laws that would balance the risks of misuse of these tools against the importance of creative, consensual use cases.

The public, however, remains dubious, pointing out why the ethical concerns discussed in the paper fall short. A user on Hacker News comments, "The "Ethical concerns" section in the article feels like a punt. The author quoting "this technology is really about better storytelling" is aspirational -- the technology's story will be written by those who use it, and you can bet people will use this maliciously."

https://twitter.com/glenngabe/status/1136667296980701185

Another user feels that this kind of technology will only result in a "slow erosion of video evidence being trustworthy". Others have pointed out that the kind of transformation described in the paper does not fall under the broad category of 'video editing': "We need more words to describe this new landscape."

https://twitter.com/BrianRoemmele/status/1136710962348617728

Another common argument is that the algorithm can be used to generate terrifyingly real deepfake videos. A recent example of a shallow fake was the altered video of Nancy Pelosi, which circulated widely and had been slowed down to make it appear she was slurring her words; Facebook was criticized for not acting faster to slow the video's spread. Beyond altering politicians' speeches, videos like these could also be used, for instance, to create fake emergency alerts, or to disrupt elections by dropping a fake video of one of the candidates before voting starts. There is also the potential for defaming someone in a personal capacity.

Sam Gregory, Program Director at Witness, tweets that one of the main steps in ensuring effective use of such tools would be to "ensure that any commercialization of synthetic media tools has equal $ invested in detection/safeguards as in detection; and to have a grounded conversation on trade-offs in mitigation". He has also listed more interesting recommendations.

https://twitter.com/SamGregory/status/1136964998864015361

For more details, we recommend you read the research paper.
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence

'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee

Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon's inaugural re:MARS event kicked off on Tuesday, June 4 at the Aria in Las Vegas. This 4-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in Machine learning, Automation, Robotics, and Space to share new ideas across these rapidly advancing domains.

re:MARS featured a host of announcements revealing a range of robots, each engineered for a different purpose. These include helicopter drones for delivery, two robot dogs by Boston Dynamics, and autonomous human-like acrobats by Walt Disney Imagineering, among others. Amazon also revealed Alexa's new dialog modeling for natural, cross-skill conversations. Let us have a brief look at each of the announcements.

Robert Downey Jr. announces 'The Footprint Coalition' project to clean up the environment using robotics

Popularly known as "Iron Man", Robert Downey Jr. provided one of the event's most exciting moments when he announced a new project called The Footprint Coalition to clean up the planet using advanced technologies. "Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade," he said.

According to Forbes, "Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.'s project." "At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey's public statement, but the actor said he plans to officially launch the project by April 2020," Forbes reports.

A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement has also drawn public attention because the "company itself is under fire for its policies around the environment and climate change".

Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering demonstrated their work to create autonomous acrobats.

https://twitter.com/jillianiles/status/1136082571081555968

https://twitter.com/thesullivan/status/1136080570549563393

Amazon will soon deliver orders using drones

On Wednesday, Amazon unveiled a revolutionary new drone that will start test-delivering toothpaste and other household goods within months. The drone is "part helicopter and part science-fiction aircraft", with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground. Gur Kimchi, vice president of Amazon Prime Air, said in an interview with Bloomberg, "We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe." However, he declined to provide details on where the delivery tests will be conducted. The drones have received a year's approval from the FAA to test the devices in limited ways that still won't allow deliveries.

According to a Bloomberg report, "It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren't standards yet for its robotic features".
Competitors to Amazon's unnamed drone include Alphabet Inc.'s Wing, which in April became the first drone operator to win FAA approval to operate as a small airline. United Parcel Service Inc. and drone startup Matternet Inc. also began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March.

Amazon's drone is about six feet across, with six propellers that lift it vertically off the ground. It is surrounded by a six-sided shroud that protects people from the propellers and also serves as a high-efficiency wing, so it can fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways, the helicopter blades becoming more like airplane propellers.

Kimchi said, "Amazon's business model for the device is to make deliveries within 7.5 miles (12 kilometers) from a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds. More than 80% of packages sold by the retail behemoth are within that weight limit."

According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines, which have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon's official blog post.

Boston Dynamics' first commercial robot, Spot

Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot. Boston Dynamics' CEO Marc Raibert told The Verge, "Spot is currently being tested in a number of 'proof-of-concept' environments, including package delivery and surveying work." He also said that although there's no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. "We're just doing some final tweaks to the design. We've been testing them relentlessly," Raibert said.

Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don't decide for themselves where to walk. The robots are simple to control: using a D-pad, users can steer the robot just like an RC car or mechanical toy. A quick tap on the video feed streamed live from the robot's front-facing camera lets users select a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted on top, a Spot robot can map environments like construction sites, identifying hazards and tracking work progress. Its robot arm gives it greater flexibility, helping it open doors and manipulate objects.

https://twitter.com/jjvincent/status/1136096290016595968

The commercial version will be "much less expensive than prototypes [and] we think they'll be less expensive than other peoples' quadrupeds", Raibert said. Here's a demo video of the Spot robot at the re:MARS event.

https://youtu.be/xy_XrAxS3ro

Alexa gets new dialog modeling for improved natural, cross-skill conversations

Amazon unveiled new features in Alexa that will help the conversational agent answer more complex questions and carry out more complex tasks.
Rohit Prasad, Alexa vice president and head scientist, said, "We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa."

This new update to Alexa is a set of AI modules that work together to generate responses to customers' questions and requests. With every round of dialog, the system produces a vector - a fixed-length string of numbers - that represents the context and the semantic content of the conversation. "With this new approach, Alexa will predict a customer's latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills," Prasad says. "This is a big leap for conversational AI."

At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep learning-based approach that lets skill developers create more natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation. With Alexa Conversations, developers provide:

1. application programming interfaces, or APIs, that provide access to their skills' functionality;
2. a list of entities that the APIs can take as inputs, such as restaurant names or movie times;
3. a handful of sample dialogs annotated to identify entities and actions and mapped to API calls.

Alexa Conversations' AI technology handles the rest. "It's way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling," Prasad said. To know more about this announcement in detail, head over to Alexa's official blog post. (A simplified, hypothetical sketch of the three developer-supplied inputs appears at the end of this article.)

Amazon Robotics unveiled two new robots at its fulfillment centers

Brad Porter, vice president of robotics at Amazon, announced two new robots: one code-named Pegasus and the other Xanthus. Pegasus, which is built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. "We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can't make that customer promise anymore", Porter said. Porter said Pegasus robots have already driven a total of 2 million miles and have reduced the number of wrongly sorted packages by 50 percent.

Xanthus, Porter said, represents the latest incarnation of Amazon's drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers. Amazon unveiled the Xanthus Sort Bot and Xanthus Tote Mover. "The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience," Porter says.
To know more about these new robots, watch the video below:

https://youtu.be/4MH7LSLK8Dk

StyleSnap: AI-powered shopping

Amazon announced StyleSnap, a recent move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories: all they need to do is upload a photo or screenshot of what they are looking for, for those moments when they are unable to describe what they want.

https://twitter.com/amazonnews/status/1136340356964999168

Amazon said, "You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after."

To use StyleSnap, just open the Amazon app, click the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. StyleSnap then provides recommendations of similar outfits on Amazon to purchase, with users able to filter across brand, pricing, and reviews. Amazon's AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can accurately pick a matching style. To know more about StyleSnap in detail, head over to Amazon's official blog post.

Amazon Go trains cashierless store algorithms using synthetic data

At re:MARS, Amazon shared more details about Amazon Go, the company's brand for its cashierless stores, explaining that Amazon Go uses synthetic data to intentionally introduce errors into its computer vision training. Challenges that had to be addressed before opening queue-free stores included the need to build vision systems that account for sunlight streaming into a store, little tolerance for latency delays, and small amounts of data for certain tasks. Synthetic data is being used in a number of ways across the industry: to power few-shot learning, improve AI systems that control robots, train AI agents to walk, and even beat humans in games of Quake III.

Dilip Kumar, VP of Amazon Go, said, "As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models." He further added, "So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren't introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance."

To know more about this news in detail, check out this video:

https://youtu.be/jthXoS51hHA

The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas, visit Amazon's blog.

World's first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase

Amazon introduces S3 batch operations to process millions of S3 objects

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available
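Returning to the Alexa Conversations developer preview described earlier in this article, the snippet below is a purely illustrative sketch of the three kinds of input a developer supplies (APIs, entities, and annotated sample dialogs). The structure and field names are invented for illustration and do not represent the actual Alexa Conversations schema.

```python
# Purely illustrative: the structure and field names below are hypothetical,
# invented to show the shape of the three developer-supplied inputs described
# in the article; they are NOT the actual Alexa Conversations schema.
import json

skill_definition = {
    # (1) APIs that expose the skill's functionality
    "apis": [
        {"name": "FindRestaurant", "inputs": ["cuisine", "city"], "output": "restaurant"},
        {"name": "BookTable", "inputs": ["restaurant", "time", "party_size"], "output": "confirmation"},
    ],
    # (2) entities the APIs can take as inputs
    "entities": ["cuisine", "city", "restaurant", "time", "party_size"],
    # (3) sample dialogs annotated with entities and mapped to API calls
    "sample_dialogs": [
        {"user": "Find me a {cuisine} place in {city}",
         "api_call": "FindRestaurant",
         "alexa": "I found {restaurant}. Want me to book a table?"},
        {"user": "Yes, for {party_size} people at {time}",
         "api_call": "BookTable",
         "alexa": "Done, your table is booked."},
    ],
}

if __name__ == "__main__":
    # Print the definition so it can be inspected or exported.
    print(json.dumps(skill_definition, indent=2))
```

In other words, the developer describes what the skill can do and provides a few annotated examples; the dialog model then generalises from there to handle natural, flexible conversations.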

Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey

Richard Gall
05 Jun 2019
4 min read
For the last four years at Packt we've been running Skill Up - a survey that aims to capture everything that's important to the developer world when it comes to work and learning. Today we've published the results of our 2019 survey. In it, you'll find a wealth of insights based on data from more than 4,500 respondents across 118 countries.

Key findings in Packt's 2019 developer survey

Over the next few weeks we'll be doing a deeper dive into some of the issues raised. But before we get started, below are some of the key findings and takeaways from this year's report. Some confirm assumptions about the tech industry that have been around for some time, while others might actually surprise you...

Python remains the most in-demand programming language

This one wasn't that surprising - Python's popularity has come through in every Skill Up since 2015. What was interesting is that this year's findings illustrate that Python's popularity isn't confined to a specific group - across age groups, salary bands and even developers using different primary programming languages, Python is regarded as a vital part of the software engineer's toolkit.

Containerization is impacting the way all developers work

We know containers are popular. Docker has been a core part of the engineering landscape for the last half a decade or so. But this year's Skill Up survey not only confirms that fact, it also highlights that the influence of containerization is far-reaching.

Read next: 6 signs you need containers

This could well indicate that the gap between development and deployment is getting smaller, with developers today more likely than ever to be accountable for how their code actually runs in production. As one respondent told us, "I want to become more well-rounded, and I believe enhancing my DevOps arsenal is a great way to start."

Not everyone is using cloud

Cloud is a big change for the software industry. But we should be cautious about overestimating the extent to which it is actually being used by developers - in this year's survey, 47% of respondents said they don't use any cloud platforms. Perhaps we shouldn't be that surprised - many respondents work in areas like government and healthcare that require strict discipline when it comes to privacy and data protection, and that are (not unrelatedly) known for being a little slow to adopt emerging technology trends. Similarly, the growth of the PaaS market means that many developers and other technology professionals use cloud-based products alongside their work, rather than developing in a way that is strictly 'cloud-native'.

Almost half of all developers spend time learning every day

Learning is an essential part of what it means to be a developer. In this year's survey we saw what that means in practice, with around 50% of respondents telling us that they spend time learning every single day. A further 30% said they spend time learning at least once a week. This leaves us wondering - what the hell is everyone else doing if they're not learning? As the survey data highlights, those in the lowest and highest salary bands are most likely to spend time learning every day.

Java is the programming language developers are most likely to regret learning

When we asked respondents what tools they regret learning, many said they didn't regret anything. However, for those that do have regrets, Java was the tool mentioned most often.
There are a number of reasons for this, but Oracle's decision to focus on enterprise Java and withdraw support for OpenJDK is undoubtedly important in creating a degree of uncertainty around the language. Among those who said they regret learning Java there is a sense that the language is simply going out of date. One respondent called it "the COBOL of modern programming."

Blockchain is overhyped and failing to deliver on expectations

It has long been suspected that Blockchain is being overhyped - and now we can confirm that feeling among developers, with 38% saying it has failed to deliver against expectations over the last 12 months. One respondent told us that they "couldn't get any gigs despite building blockchain apps", suggesting that despite capital's apparent hunger for all things Blockchain, the market isn't quite as big as the hype merchants would have us believe.

We'll be throwing the spotlight on these issues and many more over the next few weeks, so make sure you check the Packt Hub for more insights and updates. In the meantime, you can read the report in full by downloading it here.