
Tech News - Mobile

Build a Virtual Reality Solar System in Unity for Google Cardboard

Sugandha Lahoti
25 Apr 2018
21 min read
In today's tutorial, we will feature a visualization of a newly discovered solar system. We will leverage the virtual reality development process for this project in order to illustrate the power of VR and the ease of use of the Unity 3D engine. This project is a dioramic scene in which the user floats in space, observing the movement of planets within the TRAPPIST-1 planetary system. In February 2017, astronomers announced the discovery of seven planets orbiting an ultra-cool dwarf star slightly larger than Jupiter. We will use this information to build a virtual environment to run on Google Cardboard (Android and iOS) or other compatible devices.

We will additionally cover the following topics:

Platform setup: Download and install the platform-specific software needed to build an application on your target device. Experienced mobile developers with the latest Android or iOS SDK may skip this step.
Google Cardboard setup: This package of development tools facilitates display and interaction on a Cardboard device.
Unity environment setup: Initializing Unity's Project Settings in preparation for a VR environment.
Building the TRAPPIST-1 system: Design and implement the Solar System project.
Build for your device: Build and install the project onto a mobile device for viewing in Google Cardboard.

Platform setup

Before we begin building the solar system, we must set up our computer environment to build the runtime application for a given VR device. If you have never built a Unity application for Android or iOS, you will need to download and install the Software Development Kit (SDK) for your chosen platform. An SDK is a set of tools that lets you build an application for a specific software package, hardware platform, game console, or operating system. Installing the SDK may require additional tools or specific files to complete the process, and the requirements change from year to year as operating systems and hardware platforms undergo updates and revisions.

To deal with this nightmare, Unity maintains an impressive set of platform-specific instructions to ease the setup process. Their list contains detailed instructions for the following platforms:

Apple Mac
Apple TV
Android
iOS
Samsung TV
Standalone
Tizen
Web Player
WebGL
Windows

For this project, we will be building for the most common mobile devices: Android or iOS. The first step is to visit either of the following links to prepare your computer:

Android: Android users will need the Android Developer Studio, Java Virtual Machine (JVM), and assorted drivers. Follow this link for installation instructions and files: https://docs.unity3d.com/Manual/Android-sdksetup.html.
Apple iOS: iOS builds are created on a Mac and require an Apple Developer account and the latest version of the Xcode development tools. However, if you've previously built an iOS app, these conditions will have already been met by your system. For the complete instructions, follow this link: https://docs.unity3d.com/Manual/iphone-GettingStarted.html.

Google Cardboard setup

Like the Unity documentation website, Google maintains an in-depth guide for the Google VR SDK for Unity, its set of tools and examples. This SDK provides the following features on the device:

User head tracking
Side-by-side stereo rendering
Detection of user interactions (via trigger or controller)
Automatic stereo configuration for a specific VR viewer
Distortion correction
Automatic gyro drift correction

These features are all contained in one easy-to-use package that will be imported into our Unity scene.
Download the SDK from the following link before moving on to the next step: http://developers.google.com/cardboard/unity/download.

At the time of writing, the current version of the Google VR SDK for Unity is version 1.110.1, and it is available via a GitHub repository. The previous link should take you to the latest version of the SDK. However, when starting a new project, be sure to compare the SDK version requirements with your installed version of Unity.

Setting up the Unity environment

Like all projects, we will begin by launching Unity and creating a new project. The first steps will create a project folder which contains several files and directories:

1. Launch the Unity application.
2. Choose the New option after the application splash screen loads.
3. Save the project as Trappist1 in a location of your choice, as demonstrated in Figure 2.2.

To prepare for VR, we will adjust the Build Settings and Player Settings windows:

1. Open Build Settings from File | Build Settings.
2. Select the Platform for your target device (iOS or Android).
3. Click the Switch Platform button to confirm the change. The Unity icon in the right-hand column of the platform panel indicates the currently selected build platform. By default, it will appear next to the Standalone option. After switching, the icon should now be on the Android or iOS platform, as shown in Figure 2.3.

Note for Android developers: Ericsson Texture Compression (ETC) is the standard texture compression format on Android. Unity defaults to ETC (default), which is supported on all current Android devices, but it does not support textures that have an alpha channel. ETC2 supports alpha channels and provides improved quality for RGB textures on Android devices that support OpenGL ES 3.0. Since we will not need alpha channels, we will stick with ETC (default) for this project.

4. Open the Player Settings by clicking the button at the bottom of the window. The PlayerSettings panel will open in the Inspector panel.
5. Scroll down to Other Settings (Unity 5.5 through 2017.1) or XR Settings and check the Virtual Reality Supported checkbox. A list of choices will appear for selecting VR SDKs. Add Cardboard support to the list, as shown in Figure 2.4.
6. You will also need to create a valid Bundle Identifier or Package Name under the Identification section of Other Settings. The value should follow the reverse-DNS format of com.yourCompanyName.ProjectName, using alphanumeric characters, periods, and hyphens. The default value must be changed in order to build your application.

Android development note: Bundle Identifiers are unique. When an app is built and released for Android, the Bundle Identifier becomes the app's package name and cannot be changed. This restriction and other requirements are discussed in this Android documentation link: http://developer.Android.com/reference/Android/content/pm/PackageInfo.html.

Apple development note: Once you have registered a Bundle Identifier to a Personal Team in Xcode, the same Bundle Identifier cannot be registered to another Apple Developer Program team in the future. This means that, while testing your game using a free Apple ID and a Personal Team, you should choose a Bundle Identifier that is for testing only; you will not be able to use the same Bundle Identifier to release the game. An easy way to do this is to add Test to the end of whatever Bundle Identifier you were going to use, for example, com.MyCompany.VRTrappistTest.
When you release an app, its Bundle Identifier must be unique to your app and cannot be changed after your app has been submitted to the App Store.

7. Set the Minimum API Level to Android Nougat (API level 24) and leave the Target API on Automatic.
8. Close the Build Settings window and save the project before continuing.
9. Choose Assets | Import Package | Custom Package... to import the GoogleVRForUnity.unitypackage previously downloaded from http://developers.google.com/cardboard/unity/download. The package will begin decompressing the scripts, assets, and plugins needed to build a Cardboard product.
10. When completed, confirm that all options are selected and choose Import.

Once the package has been installed, a new menu titled GoogleVR will be available in the main menu. This provides easy access to the GoogleVR documentation and Editor Settings. Additionally, a directory titled GoogleVR will appear in the Project panel.

11. Right-click in the Project panel and choose Create | Folder to add the following directories: Materials, Scenes, and Scripts.
12. Choose File | Save Scenes to save the default scene. I'm using the very original name Main Scene and saving it to the Scenes folder created in the previous step.
13. Choose File | Save Project from the main menu to complete the setup portion of this project.

Building the TRAPPIST-1 System

Now that we have Unity configured to build for our device, we can begin building our space-themed VR environment. We have designed this project to focus on building and deploying a VR experience. If you are moderately familiar with Unity, this project will be very simple. Again, this is by design. However, if you are relatively new, then the basic 3D primitives, a few textures, and a simple orbiting script will be a great way to expand your understanding of the development platform:

1. Create a new script by selecting Assets | Create | C# Script from the main menu. By default, the script will be titled NewBehaviourScript. Single-click this item in the Project window and rename it OrbitController. Finally, we will keep the project organized by dragging OrbitController's icon to the Scripts folder.
2. Double-click the OrbitController script item to edit it. Doing this will open a script editor as a separate application and load the OrbitController script for editing. The following code block illustrates the default script text:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class OrbitController : MonoBehaviour {

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {

    }
}

This script will be used to determine each planet's location, orientation, and relative velocity within the system. The specific dimensions will be added later, but we will start by adding some public variables.

3. Starting on line 7, add the following five statements:

public Transform orbitPivot;
public float orbitSpeed;
public float rotationSpeed;
public float planetRadius;
public float distFromStar;

Since we will be referring to these variables in the near future, we need a better understanding of how they will be used:

orbitPivot stores the position of the object that each planet will revolve around (in this case, it is the star TRAPPIST-1).
orbitSpeed is used to control how fast each planet revolves around the central star.
rotationSpeed is how fast an object rotates around its own axis.
planetRadius represents a planet's radius compared to Earth. This value will be used to set the planet's size in our environment.
distFromStar is a planet's distance in Astronomical Units (AU) from the central star.
4. Continue by adding the following lines of code to the Start() method of the OrbitController script:

// Use this for initialization
void Start () {
    // Creates a random position along the orbit path
    Vector2 randomPosition = Random.insideUnitCircle;
    transform.position = new Vector3 (randomPosition.x, 0f, randomPosition.y) * distFromStar;

    // Sets the size of the GameObject to the Planet radius value
    transform.localScale = Vector3.one * planetRadius;
}

As shown within this script, the Start() method is used to set the initial position of each planet. We will add the dimensions when we create the planets, and this script will pull those values to set the starting point of each game object at runtime.

5. Next, modify the Update() method by adding two additional lines of code, as indicated in the following code block:

// Update is called once per frame. This code block updates the Planet's
// position during each runtime frame.
void Update () {
    this.transform.RotateAround (orbitPivot.position, Vector3.up, orbitSpeed * Time.deltaTime);
    this.transform.Rotate (Vector3.up, rotationSpeed * Time.deltaTime);
}

This method is called once per frame while the program is running. Within Update(), the location of each object is determined by computing where the object should be during the next frame. this.transform.RotateAround uses the sun's pivot point to determine where the current GameObject (identified in the script by this) should appear in this frame. Then this.transform.Rotate updates how much the planet has rotated since the last frame. Multiplying each speed by Time.deltaTime keeps the motion independent of the frame rate.

6. Save the script and return to Unity.
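For reference, here is the complete OrbitController script at this point. This is a straight assembly of the snippets above; only the inline comments on the variables have been added, and they simply restate the descriptions given earlier:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class OrbitController : MonoBehaviour {

    public Transform orbitPivot;   // the object each planet revolves around (the star)
    public float orbitSpeed;       // how fast the planet revolves around the central star
    public float rotationSpeed;    // how fast the planet rotates around its own axis
    public float planetRadius;     // planet radius relative to Earth
    public float distFromStar;     // distance from the central star in AU

    // Use this for initialization
    void Start () {
        // Creates a random position along the orbit path
        Vector2 randomPosition = Random.insideUnitCircle;
        transform.position = new Vector3 (randomPosition.x, 0f, randomPosition.y) * distFromStar;

        // Sets the size of the GameObject to the Planet radius value
        transform.localScale = Vector3.one * planetRadius;
    }

    // Update is called once per frame
    void Update () {
        this.transform.RotateAround (orbitPivot.position, Vector3.up, orbitSpeed * Time.deltaTime);
        this.transform.Rotate (Vector3.up, rotationSpeed * Time.deltaTime);
    }
}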
Now that we have our first script, we can begin building the star and its planets. For this process, we will use Unity's primitive 3D GameObjects to create the celestial bodies:

1. Create a new sphere using GameObject | 3D Object | Sphere. This object will represent the star TRAPPIST-1. It will reside in the center of our solar system and will serve as the pivot for all seven planets.
2. Right-click on the newly created Sphere object in the Hierarchy window and select Rename. Rename the object Star.
3. Using the Inspector tab, set the object to Position: 0,0,0 and Scale: 1,1,1.
4. With the Star selected, locate the Add Component button in the Inspector panel. Click the button and enter orbitcontroller in the search box. Double-click on the OrbitController script icon when it appears. The script is now a component of the star.
5. Create another sphere using GameObject | 3D Object | Sphere and position it anywhere in the scene, with the default scale of 1,1,1. Rename the object Planet b.

Figure 2.5, from the TRAPPIST-1 Wikipedia page, shows the relative orbital period, distance from the star, radius, and mass of each planet. We will use these dimensions and names to complete the setup of our VR environment. Each value will be entered as a public variable for its associated GameObject.

6. Apply the OrbitController script to the Planet b asset by dragging the script icon to the planet in the Scene window or to the Planet b object in the Hierarchy window. Planet b is our first planet, and it will serve as a prototype for the rest of the system.
7. Set the Orbit Pivot point of Planet b in the Inspector. Do this by clicking the Selector Target next to the Orbit Pivot field (see Figure 2.6). Then, select Star from the list of objects. The field value will change from None (Transform) to Star (Transform). Our script will use the origin point of the selected GameObject as its pivot point.
8. Go back and select the Star GameObject and set its Orbit Pivot to Star, as we did with Planet b.
9. Save the scene.

Now that our template planet has the OrbitController script, we can create the remaining planets:

1. Duplicate the Planet b GameObject six times, by right-clicking on it and choosing Duplicate.
2. Rename each copy Planet c through Planet h.
3. Set the public variables for each GameObject, using the following chart:

GameObject   Orbit Speed   Rotation Speed   Planet Radius   Dist From Star
Star         0             2                6               0
Planet b     .151          5                0.85            11
Planet c     .242          5                1.38            15
Planet d     .405          5                0.41            21
Planet e     .61           5                0.62            28
Planet f     .921          5                0.68            37
Planet g     1.235         5                1.34            45
Planet h     1.80          5                0.76            60

Table 2.1: TRAPPIST-1 GameObject Transform settings

4. Create an empty GameObject by right-clicking in the Hierarchy panel and selecting Create Empty. This item will help keep the Hierarchy window organized.
5. Rename the item Planets and drag Planet b through Planet h into the empty item. (If you would rather set the Table 2.1 values from code instead of typing them into the Inspector, see the sketch after this step.)
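As an optional convenience, the Inspector work in step 3 could also be done from a one-off setup script. The following is a minimal sketch, not part of the original project; the TrappistSetup class name is invented, and it assumes the GameObjects are named exactly as in Table 2.1 and already have OrbitController attached:

using UnityEngine;

// Hypothetical helper: applies the Table 2.1 values to the scene objects.
// Attach it to any GameObject in the scene. Awake() runs before any Start(),
// so the values are in place before OrbitController positions the planets.
public class TrappistSetup : MonoBehaviour {

    void Awake () {
        Transform star = GameObject.Find ("Star").transform;

        // name, orbitSpeed, rotationSpeed, planetRadius, distFromStar (Table 2.1)
        object[][] rows = {
            new object[] { "Star",     0f,     2f, 6f,    0f  },
            new object[] { "Planet b", .151f,  5f, 0.85f, 11f },
            new object[] { "Planet c", .242f,  5f, 1.38f, 15f },
            new object[] { "Planet d", .405f,  5f, 0.41f, 21f },
            new object[] { "Planet e", .61f,   5f, 0.62f, 28f },
            new object[] { "Planet f", .921f,  5f, 0.68f, 37f },
            new object[] { "Planet g", 1.235f, 5f, 1.34f, 45f },
            new object[] { "Planet h", 1.80f,  5f, 0.76f, 60f }
        };

        foreach (object[] row in rows) {
            OrbitController oc = GameObject.Find ((string) row[0]).GetComponent<OrbitController> ();
            oc.orbitPivot    = star;
            oc.orbitSpeed    = (float) row[1];
            oc.rotationSpeed = (float) row[2];
            oc.planetRadius  = (float) row[3];
            oc.distFromStar  = (float) row[4];
        }
    }
}

Either approach produces the same result; the Inspector route shown in the steps above is the one the project itself uses.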
This completes the layout of our solar system, and we can now focus on setting a location for the stationary player. Our player will not have the luxury of motion, so we must determine the optimal point of view for the scene:

1. Run the simulation. Figure 2.7 illustrates the layout being used to build and edit the scene.
2. With the scene running and the Main Camera selected, use the Move and Rotate tools or the Transform fields to readjust the position of the camera in the Scene window. Look for a position with a wide view of the action in the Game window, or one with an interesting vantage point.
3. Do not stop the simulation when you identify a position. Stopping the simulation will reset the Transform fields back to their original values.
4. Click the small Options gear in the Transform panel and select Copy Component. This will store a copy of the Transform settings to the clipboard.
5. Stop the simulation. You will notice that the Main Camera position and rotation have reverted to their original settings.
6. Click the Transform gear again and select Paste Component Values to set the Transform fields to the desired values.
7. Save the scene and project.

You might have noticed that we cannot really tell how fast the planets are rotating. This is because the planets are simple spheres without details. This can be fixed by adding materials to each planet. Since we do not really know what these planets look like, we will take a creative approach and go for aesthetics over scientific accuracy. The internet is a great source for the images we need. A simple Google search for planetary textures will turn up thousands of options. Use a collection of these images to create materials for the planets and the TRAPPIST-1 star:

1. Open a web browser and search Google for planet textures. You will need one texture for each planet and one more for the star. Download the textures to your computer and rename them something memorable (that is, planet_b_mat...). Alternatively, you can download a complete set of textures from the Resources section of the supporting website: http://zephyr9.pairsite.com/vrblueprints/Trappist1/.
2. Copy the images to the Trappist1/Assets/Materials folder.
3. Switch back to Unity and open the Materials folder in the Project panel.
4. Drag each texture to its corresponding GameObject in the Hierarchy panel. Notice that each time you do this, Unity creates a new material and assigns it to the planet GameObject.
5. Run the simulation again and observe the movement of the planets.
6. Adjust the individual planet Orbit Speed and Rotation Speed values until the motion feels natural. Take a bit of creative license here, leaning more on the scene's aesthetic quality than on scientific accuracy.
7. Save the scene and the project.

For the final design phase, we will add a space-themed background using a Skybox. Skyboxes are rendered components that create the backdrop for Unity scenes. They illustrate the world beyond the 3D geometry, creating an atmosphere to match the setting. Skyboxes can be constructed of solids, gradients, or images using a variety of graphic programs and applications. For this project, we will find a suitable component in the Asset Store:

1. Load the Asset Store from the Window menu.
2. Search for a free space-themed skybox using the phrase space skybox price:0.
3. Select a package and use the Download button to import the package into the Scene.
4. Select Window | Lighting | Settings from the main menu.
5. In the Scene section, click on the Selector Target for the Skybox Material and choose the newly downloaded skybox. (A scripted alternative is sketched after these steps.)
6. Save the scene and the project.
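Step 5 can also be done in code, since Unity exposes the same Lighting setting as the RenderSettings.skybox property. A minimal sketch, assuming a skybox material has been placed in a Resources folder; the SkyboxSetter class name and the SpaceSkybox material name are illustrative, not part of the original project:

using UnityEngine;

// Hypothetical alternative to step 5: assign the skybox material from a script.
public class SkyboxSetter : MonoBehaviour {
    void Start () {
        // Assumes a material exists at Assets/Resources/SpaceSkybox.mat (name is illustrative).
        Material sky = Resources.Load<Material> ("SpaceSkybox");
        if (sky != null) {
            RenderSettings.skybox = sky;
        }
    }
}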
With that last step complete, we are done with the design and development phase of the project. Next, we will move on to building the application and transferring it to a device.

Building the application

To experience this simulation in VR, we need to have our scene run on a head-mounted display as a stereoscopic display. The app needs to compile the proper viewing parameters, capture and process head-tracking data, and correct for visual distortion. When you consider the number of VR devices we would have to account for, the task is nothing short of daunting. Luckily, Google VR facilitates all of this in one easy-to-use plugin.

The process for building the mobile application will depend on the mobile platform you are targeting. If you have previously built and installed a Unity app on a mobile device, many of these steps will have already been completed, and a few will simply apply updates to your existing software.

Note: Unity is a fantastic software platform with a rich community and an attentive development staff. During the writing of this book, we tackled software updates (5.5 through 2017.3) and various changes in the VR development process. Although we are including the simplified building steps, it is important to check Google's VR documentation for the latest software updates and detailed instructions:

Android: https://developers.google.com/vr/unity/get-started
iOS: https://developers.google.com/vr/unity/get-started-ios

Android Instructions

If you are just starting out building applications from Unity, we suggest starting with the Android process. The workflow for getting your project exported from Unity and playing on your device is short and straightforward:

1. On your Android device, navigate to Settings | About phone or Settings | About Device | Software Info.
2. Scroll down to Build number and tap the item seven times. A popup will appear, confirming that you are now a developer.
3. Now navigate to Settings | Developer options | Debugging and enable USB debugging.

Building an Android application

1. In your project directory (at the same level as the Assets folder), create a Build folder.
2. Connect your Android device to the computer using a USB cable. You may see a prompt asking you to confirm that you wish to enable USB debugging on the device. If so, click OK.
3. In Unity, select File | Build Settings to load the Build dialog.
4. Confirm that the Platform is set to Android. If not, choose Android and click Switch Platform.
5. Note that Scenes/Main Scene should be loaded and checked in the Scenes In Build portion of the dialog. If not, click the Add Open Scenes button to add Main Scene to the list of scenes to be included in the build.
6. Click the Build button. This will create an Android executable application with the .apk file extension.

Invalid command Android error

Some Android users have reported an error relating to the Android SDK Tools location. The problem has been confirmed in many installations prior to Unity 2017.1. If this problem occurs, the best solution is to downgrade to a previous version of the SDK Tools. This can be done by following the steps outlined here:

1. Locate and delete the Android SDK Tools folder [Your Android SDK Root]/tools. This location will depend on where the Android SDK package was installed. For example, on my computer the Android SDK Tools folder is found at C:\Users\cpalmer\AppData\Local\Android\sdk.
2. Download SDK Tools from http://dl-ssl.google.com/Android/repository/tools_r25.2.5-windows.zip.
3. Extract the archive to the SDK root directory.
4. Re-attempt the Build project process.

If this is the first time you are creating an Android application, you might get an error indicating that Unity cannot locate your Android SDK root directory. If this is the case, follow these steps:

1. Cancel the build process and close the Build Settings... window.
2. Choose Edit | Preferences... from the main menu.
3. Choose External Tools and scroll down to Android.
4. Enter the location of your Android SDK root folder. If you have not installed the SDK, click the download button and follow the installation process.

Install the app onto your phone and load the phone into your Cardboard device.

iOS Instructions

The process for building an iOS app is much more involved than the Android process. There are two different types of builds:

Build for testing
Build for distribution (which requires an Apple Developer License)

In either case, you will need the following items to build a modern iOS app:

A Mac computer running OS X 10.11 or later
The latest version of Xcode
An iOS device and USB cable
An Apple ID
Your Unity project

For this demo, we will build an app for testing, and we will assume you have completed the Getting Started steps (https://docs.unity3d.com/Manual/iphone-GettingStarted.html) from Section 1. If you do not yet have an Apple ID, obtain one from the Apple ID site (http://appleid.apple.com/). Once you have obtained an Apple ID, it must be added to Xcode:

1. Open Xcode.
2. From the menu bar at the top of the screen, choose Xcode | Preferences. This will open the Preferences window.
3. Choose Accounts at the top of the window to display information about the Apple IDs that have been added to Xcode.
4. To add an Apple ID, click the plus sign at the bottom left corner and choose Add Apple ID.
5. Enter your Apple ID and password in the resulting popup box.
6. Your Apple ID will then appear in the list. Select your Apple ID.
7. Apple Developer Program teams are listed under the heading of Team. If you are using the free Apple ID, you will be assigned to a Personal Team. Otherwise, you will be shown the teams you are enrolled in through the Apple Developer Program.

Preparing your Unity project for iOS

1. Within Unity, open the Build Settings from the top menu (File | Build Settings).
2. Confirm that the Platform is set to iOS. If not, choose iOS and click Switch Platform at the bottom of the window.
3. Select the Build & Run button.

Building an iOS application

1. Xcode will launch with your Unity project.
2. Select your platform and follow the standard process for building an application from Xcode.
3. Install the app onto your phone and load the phone into your Cardboard device.

We looked at the basic Unity workflow for developing VR experiences. We also provided a stationary solution so that we could focus on the development process. The Cardboard platform provides access to VR content from a mobile platform, and it also allows for touch and gaze controls.

You read an excerpt from the book Virtual Reality Blueprints, written by Charles Palmer and John Williamson. In this book, you will learn how to create compelling virtual reality experiences for mobile and desktop with three top platforms: Cardboard VR, Gear VR, and Oculus VR.

Read More
Top 7 modern Virtual Reality hardware systems
Virtual Reality for Developers: Cardboard, Gear VR, Rift, and Vive

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers

Vincy Davis
19 Jul 2019
5 min read
Two days ago, researchers from Facebook AI Research published a paper titled “CraftAssist: A Framework for Dialogue-enabled Interactive Agents”. The authors of this research are Facebook AI research engineers Jonathan Gray and Kavya Srinet, Facebook AI research scientist C. Lawrence Zitnick, and Arthur Szlam, Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, and Siddharth Goyal.

The paper describes the implementation of an assistant bot called CraftAssist, which appears and interacts like another player in the open sandbox game Minecraft. The framework enables players to interact with the bot via in-game chat through various implemented tools and platforms, and to record these interactions. The main aim of the bot is to be a useful and entertaining assistant for all the tasks listed and evaluated by the human players.

Image Source: CraftAssist paper

To motivate the wider AI research community to use the CraftAssist platform in their own experiments, Facebook researchers have open-sourced the framework, the baseline assistant, the data, and the models. The released data includes the functions which were used to build the 2,586 houses in Minecraft, the labeling data for the walls, roofs, etc. of the houses, human rephrasings of fixed commands, and the conversion of natural language commands to bot-interpretable logical forms. The technology that allows the recording of human and bot interaction on a Minecraft server has also been released, so that researchers will be able to independently collect data.

Why is the Minecraft protocol used?

Minecraft is a popular multiplayer volumetric pixel (voxel) 3D game based on building and crafting, which allows multiplayer servers and players to collaborate and build, survive, or compete with each other. It operates through a client and server architecture. The CraftAssist bot acts as a client and communicates with the Minecraft server using the Minecraft network protocol.

The Minecraft protocol allows the bot to connect to any Minecraft server without the need for installing server-side mods. This lets the bot easily join a multiplayer server along with human players or other bots. It also lets the bot join an alternative server which implements the server-side component of the Minecraft network protocol. The CraftAssist bot uses a third-party open source Cuberite server, a fast and extensible game server for Minecraft.

Read More: Introducing Minecraft Earth, Minecraft’s AR-based game for Android and iOS users

How does CraftAssist function?

The block diagram below demonstrates how the bot interacts with incoming in-game chats and reaches the desired target.

Image Source: CraftAssist paper

First, the incoming text is transformed into a logical form called the action dictionary. The action dictionary is then translated by a dialogue object which interacts with the memory module of the bot. This produces an action or a chat response to the user.

The bot's memory uses a relational database which is structured to recognize the relations between stored items of information. The major advantage of this type of memory is that the output of the semantic parser can be converted into fully specified tasks.

The bot responds to higher-level actions, called Tasks. A Task is an interruptible process with a clear objective, carried out through step-by-step actions. It can adjust to long pauses between steps, and it can also push other Tasks onto a stack, the way functions can call other functions in a standard programming language. Move, Build, and Destroy are a few of the many basic Tasks assigned to the bot.
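To make the Task-stack idea concrete, here is a small self-contained sketch. It is purely illustrative: CraftAssist is not written in C#, and every name below is invented; the sketch only mirrors the behavior described above, where a Task runs in small interruptible steps and can push a sub-Task (here, Build pushing Move) the way a function calls another function:

using System;
using System.Collections.Generic;

// Illustrative only: an interruptible Task that runs one small step per tick.
abstract class BotTask {
    // Returns true when the Task has finished.
    public abstract bool Step(Stack<BotTask> stack);
}

class MoveTask : BotTask {
    int distanceRemaining;
    public MoveTask(int distance) { distanceRemaining = distance; }

    public override bool Step(Stack<BotTask> stack) {
        // One low-level step toward the target per tick.
        distanceRemaining--;
        Console.WriteLine($"move step, {distanceRemaining} to go");
        return distanceRemaining <= 0;
    }
}

class BuildTask : BotTask {
    bool moved;
    public override bool Step(Stack<BotTask> stack) {
        if (!moved) {
            moved = true;
            stack.Push(new MoveTask(3));  // a Task pushing another Task, like a function call
            return false;
        }
        Console.WriteLine("placing blocks");
        return true;
    }
}

class Program {
    static void Main() {
        var stack = new Stack<BotTask>();
        stack.Push(new BuildTask());
        // The interpreter loop: run the top Task one step at a time, popping finished ones.
        while (stack.Count > 0) {
            if (stack.Peek().Step(stack)) stack.Pop();
        }
    }
}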
The Dialogue Manager checks incoming chat for illegal or profane words, then queries the semantic parser. The semantic parser takes the chat as input and produces an action dictionary. The action dictionary indicates that the text is a command given by a human, and then specifies the high-level action to be performed by the bot. Once the task is created and pushed onto the Task stack, it is the responsibility of the command task 'Move' to compare the bot's current location to the target location. This makes the bot undertake a sequence of low-level step movements to reach the target.

The core of the bot's understanding of natural language depends on a neural semantic parser called the Text-to-Action-Dictionary (TTAD) model. This model receives the incoming command/chat and classifies it into an action dictionary, which is interpreted by the Dialogue Object.

The CraftAssist framework thus enables bots in Minecraft to interact and play with players by understanding human interactions, using the implemented tools. The researchers hope that since the dataset of CraftAssist is now open-sourced, more developers will be empowered to contribute to this framework by assisting or training the bots, which might lead to the bots learning from human dialogue interactions in the future.

Developers have found the CraftAssist framework interesting.

https://twitter.com/zehavoc/status/1151944917859688448

A user on Hacker News comments, “Wow, this is some amazing stuff! Congratulations!”

Check out the paper CraftAssist: A Framework for Dialogue-enabled Interactive Agents for more details.

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
What to expect in Unreal Engine 4.23?
A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news

Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python

Vincy Davis
21 Aug 2019
3 min read
A group of German researchers recently published a paper, “A Billion Open Interfaces for Eve and Mallory: MitM, DoS, and Tracking Attacks on iOS and macOS Through Apple Wireless Direct Link”, at the 28th USENIX Security Symposium (August 14–16) in the USA. The paper reveals security and privacy vulnerabilities in Apple's AirDrop file-sharing service, as well as denial-of-service (DoS) attacks which lead to privacy leaks or the simultaneous crashing of all neighboring devices.

As part of the research, two of the researchers, Milan Stute and Alexander Heinrich, have developed OpenDrop, an open-source implementation of Apple AirDrop written in Python. OpenDrop is a FOSS implementation of AirDrop. It is experimental software and the result of reverse-engineering efforts by the Open Wireless Link project (OWL). It is compatible with Apple AirDrop and used for sharing files among Apple devices such as iOS and macOS, or on Linux systems running an open re-implementation of Apple Wireless Direct Link (AWDL). The OWL project consists of researchers from the Secure Mobile Networking Lab at TU Darmstadt looking into Apple's wireless ecosystem. It aims to assess security and privacy and to enable cross-platform compatibility for next-generation wireless applications.

Currently, OpenDrop only supports Apple devices. However, it does not support all features of AirDrop and may be incompatible with future AirDrop versions. It uses the current versions of OpenSSL and libarchive and requires Python 3.6+. OpenDrop is licensed under the GNU General Public License v3.0. It is not affiliated with or endorsed by Apple Inc.

Limitations in OpenDrop

Triggering macOS/iOS receivers via Bluetooth Low Energy: Since Apple devices start their AWDL interface and AirDrop server only after receiving a custom advertisement via Bluetooth LE, it is possible that Apple AirDrop receivers may not be discovered.
Sender/receiver authentication and connection state: Currently, OpenDrop does not conduct peer authentication. It does not verify whether the TLS certificate is signed by Apple's root. Also, OpenDrop automatically accepts any file that it receives.
Sending multiple files: OpenDrop does not support sending multiple files in one share, a feature supported by Apple's AirDrop.

Users are excited to try the new OpenDrop implementation. A Redditor comments, “Yesssss! Will try this out soon on Ubuntu.” Another comment reads, “This is neat. I did not realize that enough was known about AirDrop to reverse engineer it. Keep up the good work.” Another user says, “Wow, I can’t wait to try this out! I’ve been in the Apple ecosystem for years and AirDrop was the one thing I was really going to miss.”

A few Android users wish to see such an implementation in an Android app. A user on Hacker News says, “Would be interesting to see an implementation of this in the form of an Android app, but it looks like that might require root access.” A Redditor comments, “It'd be cool if they were able to port this over to android as well.”

To learn how to send and receive files using OpenDrop, check out its GitHub page.

Apple announces expanded security bug bounty program up to $1 million; plans to release iOS Security Research Device program in 2020
Google Project Zero reveals six “interactionless” bugs that can affect iOS via Apple’s iMessage
‘FaceTime Attention Correction’ in iOS 13 Beta 3 uses ARKit to fake eye contact

Google Podcasts is transcribing full podcast episodes for improving search results

Bhagyashree R
28 Mar 2019
2 min read
On Tuesday, Android Police reported that Google Podcasts is automatically transcribing episodes. It is using these transcripts as metadata to help users find the podcasts they want to listen to, even if they don't know a show's title or when it was published. Though this is only coming to light now, Google had shared its plan to use transcripts for improving search results even before the app launched. In an interview with Pacific Content, Zack Reneau-Wedeen, Google Podcasts product manager, said that Google could “transcribe the podcast and use that to understand more details about the podcast, including when they are discussing different topics in the episode.”

This is not a user-facing feature; it works in the background. You can see the transcription of these podcasts in the web page source of the Google Podcasts web portal. After getting a hint from a user, Android Police searched for “Corbin dabbing port” instead of Corbin Davenport, a writer for Android Police. Sure enough, the app's search engine showed Episode 312 of the Android Police Podcast, his podcast, as the top result.

Source: Android Police

The transcription is enabled by Google's Cloud Speech-to-Text transcription technology. Using transcriptions of such a huge number of podcasts, Google can do things like include timestamps, index the contents, and make the text easily searchable. This will also allow Google to actually “understand” what is being discussed in a podcast without having to rely solely on the not-so-detailed notes and descriptions given by the podcasters. This could prove quite helpful when users don't remember much about a show other than a quote or an interesting subject, making search frictionless.

As a user-facing feature, this could be beneficial for both listeners and creators. “It would be great if they would surface this as feature/benefit to both the creator and the listener. It would be amazing to be able to timestamp, tag, clip, collect and share all the amazing moments I've found in podcasts over the years,” said a Twitter user.

Read the full story on Android Police.

Google announces the general availability of AMP for email, faces serious backlash from users
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
Google announces Stadia, a cloud-based game streaming service, at GDC 2019

Snapchat source code leaked and posted to GitHub

Richard Gall
09 Aug 2018
2 min read
Source code for what is believed to be a small part of Snapchat's iOS application was posted on GitHub after being leaked back in May. After being notified, Snap Inc., Snapchat's parent company, immediately filed a DMCA request with GitHub to get the code removed. A copy of the request was found by a 'security researcher' tweeting from the handle @x0rz, who shared a link to a copy of the request on GitHub:

https://twitter.com/x0rz/status/1026735377955086337

You can read the DMCA request in full here.

Part of the Snap Inc. DMCA request to GitHub

The initial leak back in May was caused by an update to the Snapchat iOS application. A spokesperson for Snap Inc. explained to CNET: "An iOS update in May exposed a small amount of our source code and we were able to identify the mistake and rectify it immediately... We discovered that some of this code had been posted online and it has been subsequently removed. This did not compromise our application and had no impact on our community."

This code was then published on GitHub by someone using the name Khaled Alshehri, believed to be based in Pakistan. The repository created, called Source-SnapChat, has now been taken down. A number of posts linked to the GitHub account suggest that the leaker had tried to contact Snapchat but had been ignored. "I will post it again until I get a reply," they said.

https://twitter.com/i5aaaald/status/1025639490696691712

Leaked Snapchat code is still being traded privately

Although GitHub has taken the repo down, it's not hard to find people claiming they have a copy of the code that they're willing to trade:

https://twitter.com/iSn0we/status/1026738393353465858

Now that the code is out in the wild, it will take more than a DMCA request to get things under control. Although it would appear the leaked code isn't substantial enough to give much away to potential cybercriminals, it's likely that Snapchat is now working hard to make the changes required to tighten its security.

Read next
Snapchat is losing users – but revenue is up
15 year old uncovers Snapchat's secret visual search function

Google open sources Filament - a physically based rendering engine for Android, Windows, Linux and macOS

Sugandha Lahoti
06 Aug 2018
2 min read
Google has just open-sourced Filament, its physically based rendering (PBR) engine for Android. It can also be used on Windows, Linux, and macOS. Filament provides a set of tools and APIs for Android developers to help them easily create high-quality 2D and 3D rendering. Filament is currently being used in the Sceneform library, both at runtime on Android devices and as the renderer inside the Android Studio plugin.

Apart from Filament, Google has also open-sourced Materials, the full reference documentation for its material system. It has also made available Material Properties, a reference sheet for the standard material model.

Google's Filament comes packed with the following features:

The rendering system is able to perform efficiently on mobile platforms. The primary target is OpenGL ES 3.x class GPUs.
The rendering system emphasizes overall picture quality.
Artists are able to iterate often and quickly on their assets, and the rendering system allows them to do so instinctively. The physically based approach of the system also allows developers to create visually believable materials even if they don't understand the theory behind the implementation.
The system relies on as few parameters as possible to reduce trial and error, and allows users to quickly master the material model.
The system uses physical units everywhere possible: distances in meters or centimeters, color temperatures in Kelvin, light units in lumens or candelas, etc.
The rendering library is as small as possible, so any application can bundle it without increasing the binary to unwanted sizes.

Filament APIs

There are two major APIs:

Native C++ API for Android, Linux, macOS, and Windows
Java/JNI API for Android, Linux, macOS, and Windows

Backends

OpenGL 4.1+ for Linux, macOS, and Windows
OpenGL ES 3.0+ for Android
Vulkan 1.0 for Android, Linux, macOS (with MoltenVK), and Windows

A sample material rendered with Filament. Source: GitHub

You can check out the Filament documentation for an in-depth explanation of real-time PBR, the graphics capabilities, and the implementation of Filament.

Google open sources Seurat to bring high precision graphics to Mobile VR
Google releases Android Things library for Google Cloud IoT Core
Google updates biometric authentication for Android P, introduces BiometricPrompt API

Google releases Oboe, a C++ library to build high-performance Android audio apps

Bhagyashree R
12 Oct 2018
3 min read
Yesterday, Google released the first production-ready version of Oboe, a C++ library for building real-time audio apps. One of its main benefits is the lowest possible audio latency across the widest range of Android devices. It is similar to AndroidX for native audio.

How Oboe works

The communication between apps and Oboe happens by reading and writing data to streams. The library facilitates the movement of audio data between your app and the audio inputs and outputs on your Android device. Apps pass data in and out by reading from and writing to audio streams, represented by the class AudioStream. A stream consists of the following:

Audio device: A hardware interface or virtual endpoint that acts as a source or sink for a continuous stream of digital audio data, for example a built-in mic or a Bluetooth headset.
Sharing mode: Determines whether a stream has exclusive access to an audio device that might otherwise be shared among multiple streams.
Audio format: This is the format of the audio data in the stream. The data passed through a stream has the usual digital audio attributes, which developers must specify when defining a stream: sample format, samples per frame, and sample rate.

The sample formats allowed by Oboe are listed in a table in the announcement. Source: GitHub

What are its benefits?

Oboe leverages the improved performance and features of AAudio on Oreo MR1 (API 27+) while maintaining backward compatibility on API 16+. The following are some of its benefits:

You write and maintain less code: It uses C++, allowing you to write clean and elegant code. With Oboe you can create an audio stream in just three lines of code, whereas the same thing in OpenSL ES requires 50+ lines.
Accelerated release process: As Oboe is supplied as a source library, bug fixes can be rolled out in a few days, as opposed to the Android platform release cycle.
Better bug handling and less guesswork: It provides workarounds for known audio bugs and has sensible default behaviour for stream properties.
Open source: It is open source and maintained by Google engineers.

To get started with Oboe, check out the full documentation and the code samples available on its GitHub repository. Also, read the announcement posted on the Android Developers Blog.

What role does Linux play in securing Android devices?
A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Google announces updates to Chrome DevTools in Chrome 71

‘Ethical mobile operating system’ /e/, an alternative for Android and iOS, is in beta

Prasad Ramesh
11 Oct 2018
5 min read
Right now, Android and iOS are the most widely used OSes on mobile phones. Both are owned by giant corporations, and there are no other offerings in line with public interest, privacy, or affordability. Android is owned by Google, which, given all the tracking it does, can hardly be called pro user privacy. iOS, from Apple, is a very closed OS, and it isn't exactly affordable for the masses. Apart from some OSes in the works, there is an OS called /e/, or eelo, from the creator of Mandrake-Linux, focused on user privacy.

Some OSes in the works

Some of the mobile OSes in progress include Tizen from Samsung, which it has released only with entry-level smartphones. There is also an OS in the making by Huawei. Google has also been working on a new OS called Fuchsia. It uses a new microkernel created by Google called Zircon, instead of Linux. It is also in the early stages, and there is no clear indication of the purpose behind building Fuchsia when Android is ubiquitous in the market. Google was fined $5B over Android antitrust issues earlier this year; maybe Fuchsia can come into the picture here.

In response to the EU's decision to fine Google, Sundar Pichai said that preventing Google from bundling its apps would "upset the balance of the Android ecosystem" and that the Android business model guaranteed zero charges for the phone makers. This seems like a warning from Google to consider licensing Android to phone makers.

Will curtains be closed on Android over legal disputes? That does not seem very likely, considering Android smartphones and Google's services on those smartphones are a big source of income for Google. They would not let it go that easily, and I'm not sure the world is ready to let go of the Android OS either. It has given the large masses access to apps, information, and connectivity. However, there is growing discontent among Android users, developers, and handset partners. Whether that frustration will build enough to create a viable market for an alternative mobile OS is something only time can tell. Either way, there is one OS, called /e/ or eelo, intent on displacing Android. It has made some progress, though it is not exactly an OS made from scratch.

What is eelo?

The above-mentioned OSes are far from complete and are owned by large corporations. Here comes eelo: it is free and open source, a forked LineageOS with all the Google apps and services removed. But that's not all; it also has a select few default applications, a new user interface, and several integrated online services. The /e/ ROM is in the Beta stage and can be installed on several devices. More devices will be supported as more contributors port and maintain the ROM for different devices. The ROM uses microG instead of Google's core apps. It uses Mozilla NLP, which makes geolocation available even when a GPS signal is not.

eelo project leader Gaël Duval states: "At /e/, we want to build an alternative mobile operating system that everybody can enjoy using, one that is a lot more respectful of user's data privacy while offering them real freedom of choice. We want to be known as the ethical mobile operating system, built in the public interest."

BlissLauncher is included, with original icons and support for widgets and auto icon sizing based on screen pixel density. There are new default applications: a mail app, an SMS app (Signal), a chat application (Telegram), along with a weather app, a note app, a tasks app, and a maps app.
There is an /e/ account manager in which users can choose to use a single /e/ identity (user@e.email) for all services. It will also have OTA updates. The default search engine is searX, with Qwant and DuckDuckGo as alternatives. They also plan to open a project in the personal assistant area.

How has the market reacted to eelo?

Early testers seem happy with /e/ (alternatively called eelo).

https://twitter.com/lowpunk/status/1050032760373633025
https://twitter.com/rvuong_geek/status/1048541382120525824

There are also some negative reactions from people who don't really welcome this new "mobile OS". A comment on Reddit by user JaredTheWolfy says: "This sounds like what Cyanogen tried to do, but at least Cyanogen was original and created a legacy for the community." Another comment, by user MyNDSETER, reads: "Don't trust Google with your data. Trust us instead. Oh gee ok and I'll take some stickers as well." Yet another Reddit user, zdakat, says: "I guess that's the android version of I made my own cryptocurrency! (by changing a few strings in Bitcoin source, or the new thing: by deploying a token on Ethereum)"

You can check out a detailed article about eelo on Hackernoon, and the /e/ website.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Microsoft Your Phone: Mirror your Android phone apps on Windows
Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!

What to expect from D programming language in the near future

Fatema Patrawala
17 Oct 2019
3 min read
On Tuesday, Atila Neves, the deputy leader of the D programming language, posted about his vision for D and what he would like to do with the language in the near future.

Make D the default language for web dev and mobile applications

D's static reflection and code generation capabilities make it an ideal candidate for implementing a codebase that needs to be called from several different languages and environments (e.g. Python, Java, R). Traditionally this is done by specifying data structures and RPC calls in an Interface Definition Language (IDL), then translating that to the supported languages, with a wire protocol to go along with it. With D, none of that is necessary. One can write the production code in D and have libraries automatically make the code callable from other languages. Hence it will be easy to write D code that runs as fast as, or faster than, the alternatives, a win on all fronts.

Memory safety for D

Atila notes that, as a systems programming language with value types and pointers, D isn't memory safe. He says that DIP1000 is a step in the right direction, but D still needs to be memory safe by default, with programmers opting out via a @trusted block or function. The DIP1000 proposal includes a scope mechanism that knows when the lifetime of a reference is over, by guaranteeing that a reference cannot escape its lexical scope. It can thus safely implement memory management schemes rather than relying on tracing garbage collection.

Safe and easy concurrency in D

As per Atila, safe and easy concurrency in D is mostly achieved through actor models, but the team still needs to finalize shared and make everything @safe as well.

Centralizing all reflection needs with an API

Instead of the current disparate and fragmented ways of getting things done (__traits, std.traits, custom code), Atila would like there to be a library that centralizes all reflection needs behind a great API.

Easy interoperability for C++ developers

C++ has been successful so far in making the transition from C virtually seamless. Atila wants current C++ programmers with legacy codebases to be able to start writing D code just as easily.

Faster development times

D needs a fast interpreter so that developers can skip machine code generation and linking. This should be the default way of running unittest blocks for faster feedback, with programmers only compiling their code for runtime performance and/or to ship binaries to final users.

String interpolation in D

Code generation is one of D's greatest strengths, and token strings enable visually pleasing blocks of code that are actually "just strings". String interpolation would make them vastly easier to use.

To know more about the D programming language, check out the official post by Atila Neves.

“Rust is the future of systems programming, C is the new Assembly”: Intel principal engineer, Josh Triplett
The V programming language is now open source – is it too good to be true?
Rust’s original creator, Graydon Hoare on the current state of system programming and safety

Facebook released Hermes, an open source JavaScript engine to run React Native apps on Android

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday, Facebook released a new JavaScript engine called Hermes under an open source MIT license. According to Facebook, this new engine will speed up start times for native Android apps built with the React Native framework.

https://twitter.com/reactnative/status/1149347916877901824

Facebook software engineer Marc Horowitz unveiled Hermes at the Chain React 2019 conference, held yesterday in Portland, Oregon. Hermes is a new tool for developers, primarily to improve app startup performance in the same way Facebook does for its own apps, and to make apps more efficient on low-end smartphones. The supposed advantage of Hermes is that developers can target all three mobile platforms with a single code base; but as with any cross-platform framework, there are trade-offs in terms of performance, security, and flexibility. Hermes is available on GitHub for all developers to use. It has also got its own Twitter account and home page.

In a demo, Horowitz showed that a React Native app with Hermes fully loaded in about half the time of the same app without Hermes, roughly two seconds faster. Horowitz emphasized that Hermes cuts the APK size (the size of the app file) to half the 41MB of a stock React Native app, and removes a quarter of the app's memory usage. In other words, with Hermes developers can get users interacting with an app faster, with fewer obstacles like slow download times and the constraints caused by multiple apps sharing limited memory, especially on lower-end phones. And these are exactly the phones Facebook is aiming at with Hermes, compared to the fancy high-end phones that well-paid developers typically use themselves. "As developers we tend to carry the latest flagship devices. Most users around the world don't," he said. "Commonly used Android devices have less memory and less storage than the newest phones and much less than a desktop. This is especially true outside of the United States. Mobile flash is also relatively slow, leading to high I/O latency."

It's not every day a new JavaScript engine is born. While there are plenty of such engines available for browsers, like Google's V8, Mozilla's SpiderMonkey, and Microsoft's Chakra, Horowitz notes that Hermes is not aimed at browsers or, for example, at the server side the way Node.js is. "We're not trying to compete in the browser space or the server space. Hermes could in theory be for those kinds of use cases, that's never been our goal."

The Register reports that Facebook has no plan to push Hermes beyond React Native to Node.js, or to turn it into the foundation of a Facebook-branded browser. This is because it's optimized for mobile apps and wouldn't offer advantages over other engines in other usage scenarios.

Hermes tries to be efficient through bytecode precompilation, rather than loading JavaScript and then parsing it. Hermes employs ahead-of-time (AOT) compilation during the mobile app build process to allow for more extensive bytecode optimization. Along similar lines, the Fuchsia Dart compiler for iOS is an AOT compiler. There are other ways to squeeze more performance out of JavaScript. The V8 engine, for example, offers a capability called custom snapshots. However, this is a bit more technically demanding than using Hermes. Hermes also abandons the just-in-time (JIT) compiler used by other JavaScript engines to compile frequently interpreted code into machine code. In the context of React Native, the JIT doesn't do that much to ease mobile app workloads.
The reason Hermes exists, as per Facebook, is to make React Native better. "Hermes allows for more optimization on mobile since developers control the build stack," said a Facebook spokesperson in an email to The Register. "For example, we implemented bytecode precompilation to improve performance and developed more efficient garbage collection to reduce memory usage."

In a discussion on Hacker News, Microsoft developer Andrew Coates claims that internal testing of Hermes and React Native in conjunction with Microsoft Office for Android shows time-to-interactive (TTI) with Hermes at 1.1s, compared to 1.4s for V8, and a 21.5MB runtime memory impact, compared to 30MB with V8.

Hermes is mostly compatible with ES6 JavaScript. To keep the engine small, support for some language features is missing, such as with statements and local-mode eval(). Facebook's spokesperson also told The Register that they plan to publish benchmark figures in the next week to support the performance claims.

Read next:
- Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
- OpenID Foundation questions Apple's Sign In feature, says it has security and privacy risks
- Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more

Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular

Richard Gall
23 Jan 2019
4 min read
Ionic Framework today released Ionic Framework 4.0. The release is a complete rebuild of the popular JavaScript framework for developing mobile and desktop apps. Although Ionic has, up until now, been built using Angular components, this new version has instead been built using Web Components. This is significant, as it changes the whole ball game for the project: it means Ionic Framework is now an app development framework that can be used alongside any front-end framework, not just Angular.

The shift away from Angular makes a lot of sense for the project. It now has the chance to grow adoption beyond the estimated five million developers around the world already using the framework. While in the past Ionic could only be used by Angular developers, it now opens up new options for development teams; rather than exacerbating a talent gap in many organizations, it could instead help ease it.

However, although it looks like Ionic is taking a significant step away from Angular, it's important to note that, at the moment, Ionic Framework 4.0 is only out on general availability for Angular; it's still only in alpha for Vue.js and React.

Ionic Framework 4.0 and open web standards

Although the move to Web Components is the stand-out change in Ionic Framework 4.0, it's also worth noting that the release has been developed in accordance with open web standards. This has been done, according to the team, to help organizations develop Design Systems (something the Ionic team wrote about just a few days ago): essentially, a set of guidelines and components that can be reused across multiple platforms and products to maintain consistency across various user experience touchpoints.

Why did the team make the changes to Ionic Framework 4.0 that they did?

According to Max Lynch, Ionic Framework co-founder and CEO, the changes present in Ionic Framework 4.0 should help organizations achieve brand consistency quickly, and give development teams the option of using Ionic with their JavaScript framework of choice. Lynch explains:

"When we look at what’s happening in the world of front-end development, we see two major industry shifts... First, there’s a recognition that the proliferation of proprietary components has slowed down development and created design inconsistencies that hurt users and brands alike. More and more enterprises are recognizing the need to adopt a design system: a single design spec, or library of reusable components, that can be shared across a team or company. Second, with the constantly evolving development ecosystem, we recognized the need to make Ionic compatible with whatever framework developers wanted to use—now and in the future. Rebuilding our Framework on Web Components was a way to address both of these challenges and future-proof our technology in a truly unique way."

What does Ionic Framework 4.0 tell us about the future of web and app development?

Ionic Framework 4.0 is a really interesting release, as it tells us a lot about where web and app development is today. It confirms, for example, that Angular's popularity is waning. It also suggests that Web Components are going to be the building blocks of the web for years to come, regardless of how frameworks evolve. As Lynch writes in a blog post introducing Ionic Framework 4.0, "in our minds, it was clear Web Components would be the way UI libraries, like Ionic, would be distributed in the future. So, we took a big bet and started porting all 100 of our components over."
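Because the components now ship as standard Web Components, they can in principle be used without any framework at all. Here is a minimal, framework-free sketch; the @ionic/core/loader import path follows the usual convention for Stencil-built packages, so treat the exact path as an assumption and check the package docs for your version:

```typescript
// A framework-free sketch of using Ionic 4 components as plain Web Components.
// The loader entry point follows the Stencil convention; verify the exact
// import path (@ionic/core/loader here) against the package docs.
import { defineCustomElements } from '@ionic/core/loader';

// Register Ionic's custom elements with the browser.
defineCustomElements(window);

// Once defined, Ionic tags work like any other DOM element, in any framework or none.
const button = document.createElement('ion-button');
button.textContent = 'Hello from a Web Component';
document.body.appendChild(button);
```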
Ionic Framework 4.0 also suggests that Progressive Web Apps are here to stay. Lynch writes in the blog post linked to above that "for Ionic to reach performance standards set by Google, new approaches for asynchronous loading and delivery were needed." To do this, he explains, the team "spent a year building out a web component pipeline using Stencil to generate Ionic’s components, ensuring they were tightly packed, lazy loaded, and delivered in smart collections consisting of components you’re actually using." The time taken to ensure that the framework could meet those standards - essentially, that it could support high-performance PWAs - underscores that this will be one of the key use cases for Ionic in the years to come.


Flutter challenges Electron, soon to release a desktop client to accelerate mobile development

Bhagyashree R
03 Dec 2018
3 min read
On Saturday, the Flutter team announced that they will soon be releasing a native desktop client to accelerate mobile development, positioning it as a competitor to Electron. The Flutter native desktop client will come with support for resizing the emulator during runtime, using assets from your PC, better RAM usage, and more.

Flutter is Google's open source mobile app SDK, which enables developers to write once and deploy natively on different platforms such as Android, iOS, Windows, Mac, and Linux. Additionally, they can also share business logic to the web using AngularDart. Here's what Flutter for desktop brings in:

Resizable emulator during runtime
To check how your layout looks on different screen sizes, you currently need to create different emulators, which is quite cumbersome. To solve this issue, the Flutter desktop client will provide a resizable emulator.

Use assets saved on your PC
When working with apps that interact with assets on the phone, developers have to first move all the testing files to the emulator or the device. With the desktop client, you can simply pick the file you want with your native file picker. Additionally, you don't have to make any changes to the code, as the desktop implementation uses the same method as the mobile implementation.

Hot reload and debugging
Hot reload and debugging allow quick experimentation: building UIs, adding new features, and fixing bugs. The desktop client supports these capabilities for better productivity.

Better RAM usage
The Android emulator consumes up to 1 GB of RAM, and RAM usage gets worse when you are also running IntelliJ and the ever-RAM-hungry Chrome. Since the desktop embedder runs natively, there is no need for the Android emulator at all.

Universally usable widgets
Most of the widgets you create, such as buttons and loading indicators, will be universally usable. Those widgets that require a platform-specific look can be encapsulated easily by checking the TargetPlatform property.

Pages and plugins
Pages differ in layout, depending on the platform and screen sizes, but not in functionality; you will be able to easily create accurate layouts for each platform with the PageLayoutWidget. As for plugins, you do not have to make any changes to the Flutter code when using a plugin that also supports the desktop embedder.

The Flutter desktop client is still in alpha, which means there will be more changes in the future. Read the official announcement on Medium.

Read next:
- Google Flutter moves out of beta with release preview 1
- Google Dart 2.1 released with improved performance and usability
- JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript


Homebrew 2.2 releases with support for macOS Catalina

Vincy Davis
28 Nov 2019
3 min read
Yesterday, Homebrew project leader Mike McQuaid announced the release of Homebrew 2.2. This is the third release of Homebrew this year. Some of the major highlights of this new version include support for macOS Catalina, faster implementations of HOMEBREW_AUTO_UPDATE_SECS and brew upgrade's post-install dependent checking, and more.

Read More: After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

New key features in Homebrew 2.2
- Homebrew now supports macOS Catalina (10.15); macOS Sierra (10.12) and older are unsupported.
- The no-op case for HOMEBREW_AUTO_UPDATE_SECS is now extremely fast, and the default auto-update interval has been raised from 1 minute to 5 minutes.
- brew upgrade no longer returns an error code if the formula is already up to date.
- brew upgrade's post-install dependent checking is now significantly faster and more reliable.
- Homebrew on Linux has been updated and has raised its minimum requirements.
- Starting from Homebrew 2.2, the package manager will use OpenSSL 1.1.
- The Homebrew team has disabled brew tap-pin, since it was buggy and little used by Homebrew maintainers.
- Homebrew will stop supporting Python 2.7 by the end of 2019, as it reaches end of life (EOL).

Read More: Apple's macOS Catalina in major turmoil as it kills iTunes and drops support for 32 bit applications

Many users are excited about this release and have appreciated the maintainers of Homebrew for their efforts.
https://twitter.com/DVJones89/status/1199710865160843265
https://twitter.com/dirksteins/status/1199944492868161538

A user on Hacker News comments, "While Homebrew is perhaps technically crude and somewhat inflexible compared to other and older package managers, I think it deserves real credit for being so easy to add packages to. I contributed Homebrew packages after a few weeks of using macOS, while I didn't contribute a single package in the ten years I ran Debian. I'm also impressed by the focus of the maintainers and their willingness for saying no and cutting features. We need more of that in the programming field. Homebrew is unashamedly solely for running the standard configuration of the newest version of well-behaved programs, which covers at least 90% of my use cases. I use Nix when I want something complicated or nonstandard."

To know about the features in detail, head over to Homebrew's official page.

Read next:
- Announcing Homebrew 2.0.0!
- Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and much more!
- Homebrew's GitHub repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
- ActiveState adds thousands of curated Python packages to its platform
- Firefox Preview 3.0 released with Enhanced Tracking Protection, Open links in Private tab by default and more

React Native 0.60 releases with accessibility improvements, AndroidX support, and more

Bhagyashree R
04 Jul 2019
4 min read
Yesterday, the team behind React Native announced the release of React Native 0.60. This release brings accessibility improvements, a new app screen, AndroidX support, CocoaPods in iOS by default, and more. Following are some of the updates introduced in React Native 0.60:

Accessibility improvements
This release ships with several improvements to accessibility APIs on both Android and iOS. As the new features directly use APIs provided by the underlying platform, they'll easily integrate with native assistive technologies. Here are some of the accessibility updates in React Native 0.60, with a short sketch of them after the AndroidX notes below:
- A number of missing roles have been added for various components.
- There's a new Accessibility States API for better web support in the future.
- AccessibilityInfo.announceForAccessibility is now supported on Android.
- Extended accessibility actions now include callbacks that deal with accessibility around user-defined actions.
- Accessibility flags and reduce motion are now supported on iOS.
- A clickable prop and an onClick callback have been added for invoking actions via keyboard navigation.

A new start screen
React Native 0.60 comes with a new, more user-friendly app screen. It shows useful instructions like editing App.js, links to the documentation, and how to start the debug menu, and it also aligns with the upcoming website redesign.
https://www.youtube.com/watch?v=ImlAqMZxveg

CocoaPods are now part of React Native's iOS project
React Native for iOS now comes with CocoaPods by default, which is an application-level dependency manager for Swift and Objective-C Cocoa projects. Developers are recommended to open the iOS platform code using the 'xcworkspace' file from now on. Additionally, the Pod specifications for the internal packages have been updated to make them compatible with the Xcode projects, which will help with troubleshooting and debugging.

Lean Core removals
In order to bring the React Native repository to a manageable state, the team started the Lean Core project. As a part of this project, they have extracted WebView and NetInfo into separate repositories. With React Native 0.60, the team has finished migrating them out of the React Native repository. Geolocation has also been extracted, based on community feedback about the new App Store policy.

Autolinking for iOS and Android
React Native libraries often consist of platform-specific or native code. The autolinking mechanism enables your project to discover and use this code. With this release, the React Native CLI team has made major improvements to autolinking. Developers using React Native before version 0.60 are advised to unlink native dependencies from a previous install.

Support for AndroidX (breaking change)
With this release, React Native has been migrated to AndroidX (Android Extension library). As this is a breaking change, developers need to migrate all their native code and dependencies as well. The React Native community has come up with a temporary solution for this called "jetifier", an AndroidX transition tool in npm format, with a React Native-compatible style.

Many users are excited about the release and consider it to be the biggest React Native release yet.
https://twitter.com/cipriancaba/status/1146411606076792833

Other developers shared some tips for migrating to AndroidX, which is an open source project that maps the original support library API packages into the androidx namespace.
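Before the migration tips, here is the promised illustration of the accessibility additions listed above. This is a rough TypeScript sketch: the SaveButton component and its strings are hypothetical, but accessibilityRole and AccessibilityInfo.announceForAccessibility are the APIs the release notes call out.

```tsx
// A rough sketch of the 0.60 accessibility APIs: a role on a touchable, plus
// AccessibilityInfo.announceForAccessibility, which 0.60 brings to Android.
// The SaveButton component and its strings are hypothetical.
import React from 'react';
import { AccessibilityInfo, Text, TouchableOpacity } from 'react-native';

export function SaveButton(): JSX.Element {
  return (
    <TouchableOpacity
      accessible={true}
      accessibilityRole="button"
      accessibilityHint="Saves the current document"
      onPress={() => {
        // Announce the outcome to the screen reader (now supported on Android).
        AccessibilityInfo.announceForAccessibility('Document saved');
      }}
    >
      <Text>Save</Text>
    </TouchableOpacity>
  );
}
```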
On the migration front, we can't use both AndroidX and the old support library together, which means "you are either all in or not in at all." Here's a piece of good advice shared by a developer on Reddit:

"Whilst you may be holding off on 0.60.0 until whatever dependency you need supports X you still need to make sure you have your dependency declarations pinned down good and proper, as dependencies around the react native world start switching over if you automatically grab a version with X when you are not ready your going to get fun errors when building, of course this should be a breaking change worthy of a major version number bump but you never know. Much safer to keep your versions pinned and have a googlePlayServicesVersion in your buildscript (and only use libraries that obey it)."

Considering this release has major breaking changes, others suggest waiting until 0.60.2 comes out. "After doing a few major updates, I would suggest waiting for this update to cool down. This has a lot of breaking changes, so I would wait for at least 0.60.2 to be sure that all the major requirements for third-party apps are fulfilled (AndroidX changes)," a developer commented on Reddit.

Along with these exciting updates, the team and community have introduced a new tool named Upgrade Helper to make the upgrade process easier. To know more, check out the official announcement.

Read next:
- React Native VS Xamarin: Which is the better cross-platform mobile development framework?
- Keeping animations running at 60 FPS in a React Native app [Tutorial]
- React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]


Meet Sapper, a military grade PWA framework inspired by Next.js

Sugandha Lahoti
10 Jul 2018
3 min read
There is a new web application framework in town. Categorized as "military grade" by its creator, Rich Harris, Sapper is a Next.js-style framework that comes close to being the ideal web application framework.

Fun fact: Sapper takes its name from the term for combat engineers, hence "military grade". It is also short for Svelte app maker.

Sapper offers a high-grade development experience, with declarative routing, hot-module replacement, and scoped styles. It also includes modern development practices on par with current web application frameworks, such as code-splitting, server-side rendering, and offline support. It is powered by Svelte, the UI framework that is essentially a compiler which turns app components into standalone JavaScript modules.

What makes Sapper unique is that it dramatically reduces the amount of code that gets sent to the browser. In the RealWorld project challenge, the Sapper implementation took 39.6kb (11.8kb zipped) to render an interactive homepage. The entire app came to 132.7kb (39.9kb zipped), which is significantly smaller than the reference React/Redux implementation at 327kb (85.7kb). In fact, the implementation totals 1,201 lines of source code, compared to 2,377 for the reference implementation.

Another crucial feature of Sapper is code splitting. If an app uses React or Vue, there's a hard lower bound on the size of the initial code-split chunk: the framework itself, which is likely to be a significant portion of the total app size. Sapper has no such lower bound for the initial chunk, which makes the app even faster.

The framework is also extremely performant, memory-efficient, and easy to learn with Svelte's template syntax. It has scoped CSS, with unused style removal and minification built in. It also ships with svelte/store, a tiny global store that synchronises state across the component hierarchy with zero boilerplate (a sketch of the store follows at the end of this piece).

Sapper has not yet had a 1.0.0 release. Currently, Svelte's compiler operates at the component level; for the stable release, the team is looking at 'whole-app optimisation', where the compiler understands the boundaries between components in order to generate even more efficient code. Also, because Sapper is written in TypeScript, there may be plans to officially support TypeScript.

Sapper may not be ready yet to take over an established framework such as React. One reason is that developers may have an aversion to any form of 'template language'. Moreover, React is extremely flexible and appealing to new developers because of its highly active community and other learning resources: in particular, the devtools, editor integrations, tutorials, StackOverflow answers, and even job opportunities. Compared to such a giant, Sapper still has a long way to go.

You can view the framework's progress and contribute your own ideas on Sapper's GitHub and Gitter.

Read next:
- Top frameworks for building your Progressive Web Apps (PWA)
- 5 reasons why your next app should be a PWA (progressive web app)
- Progressive Web AMPs: Combining Progressive Web Apps and AMP
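As promised above, a quick flavour of svelte/store. This is a minimal sketch based on the Svelte v2-era store API that Sapper used at the time; treat the exact import path and the get()/set() shape as assumptions, since newer Svelte releases replaced this class with writable/readable stores:

```typescript
// A minimal sketch of the Svelte v2-era global store that Sapper apps used.
// The import path and get()/set() API shown here are assumptions based on the
// v2 docs; later Svelte versions use writable/readable stores instead.
import { Store } from 'svelte/store.js';

const store = new Store({ count: 0 });

// Components attached to this store re-render when its state changes.
store.set({ count: store.get().count + 1 });
console.log(store.get().count); // 1
```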