How-To Tutorials

Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests

Sugandha Lahoti
20 Aug 2019
5 min read
Update, August 23, 2019: Following Twitter and Facebook, Google has shut down 210 YouTube channels that were tied to misinformation about the Hong Kong protests. The article has been updated accordingly.

Chinese state-run media agencies have been buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong's pro-democracy demonstrations as violent. These ads, reported by Pinboard's Twitter account, were circulated by the state-run news agency Xinhua, describing the protesters as "escalating violence" and calling for "order to be restored." In reality, the Hong Kong protests have been overwhelmingly peaceful marches. Pinboard warned and criticized Twitter about these ads and asked for their takedown. Though Twitter and Facebook are banned in China, Chinese state-run media runs several English-language accounts to present its views to the outside world.

https://twitter.com/pinboard/status/1162711159000055808
https://twitter.com/Pinboard/status/1163072157166886913

Twitter bans 936 accounts managed by the Chinese state

Following this revelation, Twitter said in a blog post yesterday that it had discovered a "significant state-backed information operation focused on the situation in Hong Kong, specifically the protest movement". It identified 936 accounts that were undermining "the legitimacy and political positions of the protest movement on the ground", and found a larger, spammy network of approximately 200,000 accounts representing the most active portions of the campaign. These were suspended for a range of violations of Twitter's platform manipulation policies. The accounts accessed Twitter through VPNs and over a "specific set of unblocked IP addresses" from within China. "Covert, manipulative behaviors have no place on our service — they violate the fundamental principles on which our company is built," said Twitter.

Twitter bans ads from Chinese state-run media

Twitter also banned advertising from Chinese state-run news media entities across the world, declaring that affected accounts will be free to continue to use Twitter to engage in public conversation, but not its advertising products. The policy applies to news media entities that are either financially or editorially controlled by the state, said Twitter. Affected entities will be notified directly and given 30 days to offboard from advertising products; no new campaigns will be allowed. However, Pinboard argues that 30 days is too long and that Twitter should suspend Xinhua's ad account immediately.

https://twitter.com/Pinboard/status/1163676410998689793

Pinboard also calls on Twitter to disclose:
- How much money it took from Xinhua
- How many ads it ran for Xinhua since the start of the Hong Kong protests in June
- How those ads were targeted

Facebook blocks Chinese accounts engaged in inauthentic behavior

Following a tip shared by Twitter, Facebook removed seven Pages, three Groups, and five Facebook accounts involved in coordinated inauthentic behavior as part of a small network that originated in China and focused on Hong Kong. Unlike Twitter, however, Facebook did not announce any policy changes in response to the discovery. YouTube was also notably absent from the fight against Chinese misinformation propaganda.

https://twitter.com/Pinboard/status/1163694701716766720

However, on August 22, YouTube axed 210 channels found to be spreading misinformation about the Hong Kong protests.
"Earlier this week, as part of our ongoing efforts to combat coordinated influence operations, we disabled 210 channels on YouTube when we discovered channels in this network behaved in a coordinated manner while uploading videos related to the ongoing protests in Hong Kong," Shane Huntley, director of software engineering for Google Security's Threat Analysis Group, said in a blog post. "We found use of VPNs and other methods to disguise the origin of these accounts and other activity commonly associated with coordinated influence operations."

Kyle Bass, Chief Investment Officer at Hayman Capital Management, called on all social media outlets to ban Chinese state-run propaganda sources. He tweeted, "Twitter, Facebook, and YouTube should BAN all State-backed propaganda sources in China. It's clear that these 200,000 accounts were set up by the "state" of China. Why allow Xinhua, global times, china daily, or any others to continue to act? #BANthemALL"

Public acknowledges Facebook and Twitter's role in exposing Chinese state media

Experts and journalists appreciated the role social media companies played in exposing the campaign and the way they responded to state interference. Bethany Allen-Ebrahimian, President of the International China Journalist Association, called it huge news. "This is the first time that US social media companies are openly accusing the Chinese government of running Russian-style disinformation campaigns aimed at sowing discord," she tweeted. She added, "We've been seeing hints that China has begun to learn from Russia's MO, such as in Taiwan and Cambodia. But for Twitter and Facebook to come out and explicitly accuse the Chinese govt of a disinformation campaign is another whole level entirely."

Adam Schiff, Representative (D-CA 28th District), tweeted, "Twitter and Facebook announced they found and removed a large network of Chinese government-backed accounts spreading disinformation about the protests in Hong Kong. This is just one example of how authoritarian regimes use social media to manipulate people, at home and abroad." He added, "Social media platforms and the U.S. government must continue to identify and combat state-backed information operations online, whether they're aimed at disrupting our elections or undermining peaceful protesters who seek freedom and democracy."

Social media platforms took a commendable step against Chinese state-run media actors attempting to manipulate their platforms to discredit grassroots organizing in Hong Kong. It will be interesting to see whether they continue to protect individual freedoms and provide a safe and transparent platform if state actors from countries where they have huge audiences, such as India or the US, adopt similar tactics to suppress or manipulate the public or target movements.

Read next:
- Facebook bans six toxic extremist accounts and a conspiracy theory organization
- Cloudflare terminates services to 8chan following yet another set of mass shootings in the US
- YouTube's ban on "instructional hacking and phishing" videos receives backlash from the infosec community

Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability

Bhagyashree R
19 Aug 2019
5 min read
Inspired by Mozilla's anti-tracking policy, Apple has announced its intention to implement the WebKit Tracking Prevention Policy in Safari, the details of which it shared last week. The policy outlines the types of tracking techniques that will be prevented in WebKit to ensure user privacy. The anti-tracking mitigations listed in the policy will be applied "universally to all websites, or based on algorithmic, on-device classification."

https://twitter.com/webkit/status/1161782001839607809

Web tracking is the collection of user data over multiple web pages and websites, which can be linked to individual users via a unique user identifier. All your previous interactions with any website can be recorded and recalled with the help of a tracking mechanism such as cookies. The data tracked includes the things you have searched for, the websites you have visited, the things you have clicked on, the movements of your mouse around a web page, and more. Organizations and companies rely heavily on web tracking to gain insight into user behavior and preferences, chiefly for user profiling and targeted marketing. While this tracking helps businesses, it can be pervasive and put to more sinister purposes. In the recent past, many companies, including big tech firms like Facebook and Google, have been involved in scandals over violating users' online privacy; Facebook's Cambridge Analytica scandal and Google's cookie case are two examples.
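To make the mechanism concrete, here is a toy sketch of cookie-based cross-site tracking: a hypothetical "third-party pixel" server of the kind many sites embed. All names here are invented for illustration; this is not code from Apple, WebKit, or any real ad network.

```python
# Toy third-party tracking pixel (illustrative, hypothetical names only).
# Many unrelated sites embed an image served by this one host; the host
# hands each browser a unique "uid" cookie and logs the embedding page
# via the Referer header, linking one identity to activity across sites.
import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

VISITS = {}  # uid -> list of referring pages seen for that browser

class TrackerPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        uid = cookies["uid"].value if "uid" in cookies else str(uuid.uuid4())
        VISITS.setdefault(uid, []).append(self.headers.get("Referer", "unknown"))
        self.send_response(200)
        # A long-lived cookie acts as the "unique user identifier" above.
        self.send_header("Set-Cookie", f"uid={uid}; Max-Age=31536000")
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # stand-in for a 1x1 pixel image

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackerPixel).serve_forever()
```

Every embedding page reports back with the same uid cookie; it is this cross-site linkage, rather than any single cookie, that WebKit's policy classifies as tracking and sets out to prevent.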
Apple aims to create "a healthy web ecosystem, with privacy by design"

The WebKit Tracking Prevention Policy will prevent several tracking techniques, including cross-site tracking, stateful tracking, covert stateful tracking, navigational tracking, fingerprinting, covert tracking, and other unknown techniques that do not fall under these categories. Where a tracking technique cannot be prevented without undue harm to the user, WebKit will limit its capability; if that is still not enough, users will be asked for their consent.

Apple will treat any attempt to subvert its anti-tracking methods as a security vulnerability. "We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities," Apple wrote, warning that it may add further restrictions, without prior notice, against parties who attempt to circumvent its tracking prevention methods. Apple further noted that there will be no exception even for valid uses of a technique that is also used for tracking. The announcement reads, "But WebKit often has no technical means to distinguish valid uses from tracking, and doesn't know what the parties involved will do with the collected data, either now or in the future."

WebKit Tracking Prevention Policy's unintended impact

With the implementation of this policy, Apple warns of certain unintended repercussions as well. Among the possibly affected features are funding websites using targeted or personalized advertising, federated login using a third-party login provider, fraud prevention, and more. Where there are tradeoffs, WebKit will prioritize user benefits over current website practices. Apple promises to limit this unintended impact and may update its tracking prevention methods to permit certain use cases; in the future, it will also introduce new web technologies, such as the Storage Access API and Privacy-Preserving Ad Click Attribution, that allow these practices without compromising users' online privacy.

What users are saying about Apple's anti-tracking policy

At a time of increasing concern about online privacy, this policy comes as a blessing. Many users are appreciating the move, while some fear it will affect user-friendly features. In an ongoing discussion on Hacker News, a user commented, "The fact that this makes behavioral targeting even harder makes me very happy." Others believe that focusing on tracking protection will give browsers an edge over Google's Chrome. A user said, "One advantage of Google's dominance and their business model being so reliant on tracking, is that it's become the moat for its competitors: investing energy into tracking protection is a good way for them to gain a competitive advantage over Google, since it's a feature that Google will not be able to copy. So as long as Google's competitors remain in business, we'll probably at least have some alternatives that take privacy seriously."

When asked about the added restrictions that will be applied if a party is found circumventing tracking prevention, a member of the WebKit team commented, "We're willing to do specifically targeted mitigations, but only if we have to. So far, nearly everything we've done has been universal or algorithmic. The one exception I know of was to delete tracking data that had already been planted by known circumventors, at the same time as the mitigation to stop anyone else from using that particular hole (HTTPS supercookies)."

Some users had questions about the features that will be impacted by the policy. One user wrote, "While I like the sentiment, I hate that Safari drops cookies after a short period of non-use. I wind up having to re-login to sites constantly while Chrome does it automatically." Another added, "So what is going to happen when Apple succeeds in making it impossible to make any money off advertisements shown to iOS users on the web? I'm currently imagining a future where publishers start to just redirect iOS traffic to install their app, where they can actually make money. Good news for the walled garden, I guess?"

Read Apple's official announcement to know more about the WebKit Tracking Prevention Policy.

Read next:
- Firefox Nightly now supports Encrypted Server Name Indication (ESNI) to prevent 3rd parties from tracking your browsing history
- All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night
- Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users

Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube

Sugandha Lahoti
14 Aug 2019
4 min read
Deepfakes are becoming frighteningly, almost indistinguishably, real. A YouTube clip of Bill Hader in conversation with David Letterman on his late-night show in 2008 is going viral: as Hader does his Tom Cruise impression, his face subtly shifts into Cruise's. The viral clip has been viewed over 3 million times and was uploaded by Ctrl Shift Face (a Slovakian citizen who goes by the name of Tom), who has created other entertaining videos using deepfake technology. For the unaware, deepfakes use artificial intelligence and deep neural networks to alter audio or video and pass it off as true or original content.

https://www.youtube.com/watch?v=VWrhRBb-1Ig

Deepfakes are problematic because they make it hard to differentiate between fake and real videos or images, giving people the liberty to use them for harassment and illegal activities. The most common uses of deepfakes are revenge porn, political abuse, and fake celebrity videos like this one. The top comments on the clip express the dangers of realistic AI manipulation:

"The fade between faces is absolutely unnoticeable and it's flipping creepy. Nice job!"
"I'm always amazed with new technology, but this is scary."
"Ok, so video evidence in a court of law just lost all credibility"

https://twitter.com/TheMuleFactor/status/1160925752004624387

Deepfakes can also be used as a weapon of misinformation, maliciously hoaxing governments and populations and causing internal conflict. Gavin Sheridan, CEO of Vizlegal, also tweeted the clip: "Imagine when this is all properly weaponized on top of already fractured and extreme online ecosystems and people stop believing their eyes and ears."

He also talked about the future impact. "True videos will be called fake videos, fake videos will be called true videos. People steered towards calling news outlets "fake", will stop believing their own eyes. People who want to believe their own version of reality will have all the videos they need to support it," he tweeted. He also wondered whether we would need A-list movie actors at all in the future, or could simply choose which actor portrays which role: "Will we need A-list actors in the future when we could just superimpose their faces onto the faces of other actors? Would we know the difference? And could we not choose at the start of a movie which actors we want to play which roles?"

The past year has seen accelerated growth in the use of deepfakes. In June, a fake video of Mark Zuckerberg was posted on Instagram under the username bill_posters_uk, in which Zuckerberg appears to give a threatening speech about the power of Facebook. Facebook had received strong criticism for promoting fake videos on its platform when, in May, the company refused to remove a doctored video of senior politician Nancy Pelosi. Samsung researchers also released a deepfake that could animate faces with just your voice and a picture, using temporal GANs. Following this, the House Intelligence Committee held a hearing to examine the public risks posed by "deepfake" videos.

Tom, the creator of the viral video, told The Guardian that he doesn't see deepfake videos as the end of the world and hopes his deepfakes will raise public awareness of the technology's potential for misuse. "It's an arms race; someone is creating deepfakes, someone else is working on other technologies that can detect deepfakes. I don't really see it as the end of the world like most people do. People need to learn to be more critical.
The general public are aware that photos could be Photoshopped, but they have no idea that this could be done with video."

Ctrl Shift Face is also on Patreon, offering access to bonus materials, behind-the-scenes footage, deleted scenes, and early access to videos for those who provide monetary support.

Read next:
- Now there is a deepfake that can animate your face with just your voice and a picture
- Mark Zuckerberg just became the target of the world's first high-profile white hat deepfake op
- Worried about deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts

Vulnerabilities in the Picture Transfer Protocol (PTP) allow researchers to inject ransomware into Canon's DSLR camera

Savia Lobo
13 Aug 2019
5 min read
At DefCon 27, Eyal Itkin, a vulnerability researcher at Check Point Software Technologies, demonstrated how vulnerabilities in the Picture Transfer Protocol (PTP) allowed him to infect a Canon EOS 80D DSLR with ransomware over a rogue WiFi connection. Beyond image transfer, PTP contains dozens of different commands that support anything from taking a live picture to upgrading the camera's firmware.

The researcher chose Canon's EOS 80D DSLR camera for three major reasons:
- Canon is the largest DSLR maker, controlling more than 50% of the market.
- The EOS 80D supports both USB and WiFi.
- Canon has an extensive "modding" community, called Magic Lantern, an open-source free software add-on that adds new features to Canon EOS cameras.

Itkin highlighted six vulnerabilities in the PTP implementation that can allow a hacker to infiltrate the DSLR, inject ransomware, and lock the device, after which users might have to pay a ransom to free up their camera and picture files:
- CVE-2019-5994 – Buffer overflow in SendObjectInfo (opcode 0x100C)
- CVE-2019-5998 – Buffer overflow in NotifyBtStatus (opcode 0x91F9)
- CVE-2019-5999 – Buffer overflow in BLERequest (opcode 0x914C)
- CVE-2019-6000 – Buffer overflow in SendHostInfo (opcode 0x91E4)
- CVE-2019-6001 – Buffer overflow in SetAdapterBatteryReport (opcode 0x91FD)
- CVE-2019-5995 – Silent malicious firmware update

Itkin's team informed Canon about the vulnerabilities on March 31, 2019. On August 6, Canon published a security advisory informing users that "at this point, there have been no confirmed cases of these vulnerabilities being exploited to cause harm" and asking them to take the advised measures to ensure safety. Itkin told The Verge, "due to the complexity of the protocol, we do believe that other vendors might be vulnerable as well, however, it depends on their respective implementation." Though Itkin worked only with the Canon model, he said DSLRs from other companies may also be at high risk.

How Itkin's team found the vulnerabilities in Canon's DSLR

After dumping the camera's firmware and loading it into their disassembler (IDA Pro), the team says finding the PTP layer was an easy task, for two reasons:
- The PTP layer is command-based, and every command has a unique numeric opcode.
- The firmware contains many indicative strings, which eases the task of reverse-engineering it.

The team then traversed back from the PTP OpenSession handler and found the main function that registers all of the PTP handlers according to their opcodes. "When looking on the registration function, we realized that the PTP layer is a promising attack surface. The function registers 148 different handlers, pointing to the fact that the vendor supports many proprietary commands. With almost 150 different commands implemented, the odds of finding a critical vulnerability in one of them is very high," Itkin wrote in the research report.
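For context on what those opcodes are: PTP is a binary, command-based protocol in which every operation travels in a small container carrying a length, a container type, an operation code, and a transaction ID, followed by up to five 32-bit parameters. Below is a minimal sketch of building such a command container in Python, based on the commonly documented PTP (ISO 15740) layout; treat the field details as illustrative rather than as Canon's implementation.

```python
# Sketch of a PTP command container: 4-byte length, 2-byte container
# type, 2-byte operation code, 4-byte transaction id (little-endian),
# then up to five 32-bit parameters. Layout per common PTP docs; the
# handling of such containers in Canon firmware is what Check Point
# reverse-engineered.
import struct

PTP_CONTAINER_TYPE_COMMAND = 1

def ptp_command(opcode: int, transaction_id: int, *params: int) -> bytes:
    assert len(params) <= 5, "PTP commands carry at most 5 parameters"
    payload = b"".join(struct.pack("<I", p) for p in params)
    length = 12 + len(payload)  # 12-byte header plus parameters
    header = struct.pack("<IHHI", length, PTP_CONTAINER_TYPE_COMMAND,
                         opcode, transaction_id)
    return header + payload

# e.g. the SendObjectInfo opcode (0x100C) targeted by CVE-2019-5994:
print(ptp_command(0x100C, 1).hex())
```

It is the firmware handlers that parse containers like these, 148 of them in the Canon firmware, that formed the attack surface.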
Each PTP command handler implements the same code API. The API makes use of the ptp_context object, an object that is partially documented thanks to Magic Lantern, Itkin said. The team realized that most of the commands were relatively simple. "They receive only a few numeric arguments, as the protocol supports up to 5 such arguments for every command. After scanning all of the supported commands, the list of 148 commands was quickly narrowed down to 38 commands that receive an input buffer," Itkin writes. "From an attacker's viewpoint, we have full control of this input buffer, and therefore, we can start looking for vulnerabilities in this much smaller set of commands. Luckily for us, the parsing code for each command uses plain C code and is quite straight-forward to analyze," he added. Following this, they were able to find their first vulnerabilities, and then the rest.

Check Point and Canon have advised users to ensure that their cameras are using the latest firmware and to install patches whenever they become available. If the camera is not in use, owners should keep its Wi-Fi turned off.

A user on Hacker News points out, "It could get even worse if the perpetrator instead of bricking the device decides to install a backdoor that silently uploads photos to a server whenever a wifi connection is established."

Another user, on PetaPixel, explained what quick measures owners should take: "A custom firmware can close the vulnerability also if they put in the work. Just turn off wifi and don't use random computers in grungy cafes to connect to your USB port and you should be fine. It may or may not happen but it leaves the door open for awesome custom firmware to show up. Easy ones are real CLOG for 1dx2. For the 5D4, I would imagine 24fps HDR, higher res 120fps, and free Canon Log for starters. For non tech savvy people that just leave wifi on all the time, that visit high traffic touristy photo landmarks they should update. Especially if they have no interest in custom firmware."

Another PetaPixel user highlighted that "this hack relies on a serious number of things to be in play before it works, there is no mention of how to get the camera working again, is it just a case of flashing the firmware and accepting you may have lost a few images ?... there's a lot more things to worry about than this."

Check Point has demonstrated the entire attack in the following YouTube video: https://youtu.be/75fVog7MKgg

To know more about this news in detail, read Eyal Itkin's complete research on Check Point.

Read next:
- Researchers reveal vulnerability that can bypass payment limits in contactless Visa card
- Apple patched vulnerability in Mac's Zoom Client; plans to address 'video on by default'
- VLC media player affected by a major vulnerability in a 3rd-party library, libebml

OpenTracing and OpenCensus merge into OpenTelemetry project; Google introduces OpenCensus Web

Sugandha Lahoti
13 Aug 2019
4 min read
Google has introduced an extension of OpenCensus called OpenCensus Web, a library for collecting application performance and behavior monitoring data of web pages. The library focuses on the frontend web application code that executes in the browser, allowing it to collect user-side performance data. It is still in alpha, with the API subject to change. This is great news for websites that are heavy by nature, such as media-driven pages like Instagram, Facebook, YouTube, and Amazon, and for web apps.

OpenCensus Web interacts with three application components: the frontend web server, the browser JS, and the OpenCensus Agent. The agent receives traces from the frontend web server's proxy endpoint, or directly from the browser JS, and exports them to a trace backend.

Features of OpenCensus Web:
- It traces spans for the initial page load, including server-side HTML rendering.
- The initial-load spans include detailed annotations for DOM load events as well as network events.
- It automatically traces all click events, as long as the click is done on a DOM element and the element is not disabled.
- It traces route transitions between the different sections of your page by monkey-patching the History API.
- It allows users to create custom spans for their web application, for tasks or code involved in user interaction.
- It creates automatic spans for HTTP requests and browser performance data.
- It relates user interactions back to the initial page load tracing.
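For readers new to tracing, a "span" is simply a named, timed unit of work with an optional parent, so spans nest into a tree describing where time went during an operation such as a page load. The following is a toy, library-free sketch of that idea; it is for intuition only and is not the OpenCensus or OpenTelemetry API.

```python
# A toy illustration of trace spans: named, timed, and nested.
import time
from contextlib import contextmanager

SPANS = []   # finished spans: (name, parent, start, end)
_STACK = []  # names of currently open spans, for parent/child nesting

@contextmanager
def span(name):
    parent = _STACK[-1] if _STACK else None
    _STACK.append(name)
    start = time.time()
    try:
        yield
    finally:
        _STACK.pop()
        SPANS.append((name, parent, start, time.time()))

# Roughly the shape of an initial page load trace:
with span("initial page load"):
    with span("fetch /api/data"):
        time.sleep(0.05)  # stand-in for a network request
    with span("render DOM"):
        time.sleep(0.02)  # stand-in for DOM work

for name, parent, start, end in SPANS:
    print(f"{name!r} (parent={parent!r}): {(end - start) * 1000:.1f} ms")
```

A real client library adds trace IDs, sampling, and an exporter that ships these spans to a backend, which is the role the OpenCensus Agent plays in the architecture above.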
Along with this release, the OpenCensus family of projects is merging with OpenTracing into OpenTelemetry. This means the entire OpenCensus community will be moving over to OpenTelemetry, Google and Omnition included. OpenCensus Web's functionality will be migrated into OpenTelemetry JS once that project is ready. Omnition's founder wrote on Hacker News, "Although Google will be heavily involved in both the client libraries and agent development, Omnition, Microsoft, and others will also be major contributors."

Another Hacker News comment explains the merger in more detail: "OpenCensus is a Google project to standardize metrics and distributed tracing. It's an API spec and libraries for various languages with varying backend support. OpenTracing is a CNCF project as an API for distributed tracing with a separate project called OpenMetrics for the metrics API. Neither include libraries and rely on the community to provide them. The industry decided for once that we don't need all this competing work and is consolidating everything into OpenTelemetry that combines an API for tracing and metrics along with libraries. Logs (the 3rd part of observability) are in the planning phase. OpenCensus Web is bringing the tracing/metrics part to your frontend JS so you can measure how your webapp works in addition to your backend apps and services."

By September 2019, OpenTelemetry plans to reach parity with existing projects for C#, Golang, Java, NodeJS, and Python. As each language reaches parity, the corresponding OpenTracing and OpenCensus projects will be sunset: old projects will be frozen, but the new project will continue to support existing instrumentation for two years via a backwards-compatibility bridge. Read more on the OpenTelemetry roadmap.

Public reaction to OpenCensus Web has been positive, with people expressing their opinions on a Hacker News thread. One commenter wrote, "This is great, as the title says, this means that web applications can now have tracing across the whole stack, all within the same platform." Another added, "I am also glad to know that the merge between OpenTracing and OpenCensus is still going well. I started adding telemetry to the projects I maintain in my current job and so far it has been very helpful to detect not only bottlenecks in the operations but also sudden spikes in the network traffic since we depend on so many 3rd-party web APIs that we have no control over. Thank you OpenCensus team for providing me with the tools to learn more."

For more information about OpenCensus Web, visit Google's blog.

Read next:
- CNCF Sandbox, the home for evolving cloud-native projects, accepts Google's OpenMetrics Project
- Google open sources ClusterFuzz, a scalable fuzzing tool
- Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard

Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we’ll be in big trouble”

Sugandha Lahoti
13 Aug 2019
7 min read
PyLondinium, the conference for Python developers, was held in London from the 14th to the 16th of June, 2019. In the Sunday keynote, Łukasz Langa, the creator of Black (the Python code formatter) and the Python 3.8 release manager, spoke about where Python could be in 2020 and why Python developers should try new browser- and mobile-friendly versions of Python.

Python is an extremely expressive language, says Łukasz. "When I first started I was amazed how much you can accomplish with just a few lines of code especially compared to Java. But there are still languages that are even more expressive and enable even more compact notation." So what makes Python special? It is runnable pseudocode: it reads like English, and it is very elegant. "Our responsibility as developers," Łukasz mentions, "is to make Python's runnable pseudocode convenient to use for new programmers."

Python has gotten much bigger, more stable, and more complex in the last decade. However, the lowest-hanging fruit, Łukasz says, has already been picked, and what's left is the maintenance of an increasingly fossilizing interpreter and a stunted standard library. This maintenance is both tedious and tricky, especially for a dynamic interpreted language like Python.

Python being a community-run project is both a blessing and a curse

Łukasz talks about how Python is the biggest community-run programming language on the planet. Other programming languages with similar or larger market penetration are run either by single corporations or by multiple committees. Being a community project is both a blessing and a curse for Python, says Łukasz. It's a blessing because it's truly free from shareholder pressure and market swings. It's a curse because almost the entire core developer team volunteers its time and effort for free; the Python Software Foundation graciously funds infrastructure and events, but it does not currently employ any core developers. Since both "Python" and "software" are right in the name of the foundation, Łukasz says he wants that to change. "If you don't pay people, you have no influence over what they work on. Core developers often choose problems to tackle based on what inspires them personally. So we never had an explicit roadmap on where Python should go and what problems developers should focus on," he adds. Python is no longer governed by a BDFL, says Łukasz: "My personal hope is that the steering council will be providing visionary guidance from now on and will present us with an explicit roadmap on where we should go."

Interesting and dead projects in Python

Łukasz talked about mypyc and invited people to work on and contribute to the project, and organizations to sponsor it. Mypyc is a compiler that compiles mypy-annotated, statically typed Python modules into CPython C extensions. It restricts the Python language to enable compilation, supporting a subset of Python.
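To illustrate the kind of code mypyc targets, here is a small, hypothetical module: fully mypy-annotated Python that runs unchanged under plain CPython but can also be compiled into a C extension with the mypyc command that ships with mypy.

```python
# fib.py: a fully type-annotated module of the kind mypyc compiles
# into a CPython C extension; it also runs as-is under the interpreter.

def fib(n: int) -> int:
    a: int = 0
    b: int = 1
    for _ in range(n):
        a, b = b, a + b
    return a

def main() -> None:
    print([fib(i) for i in range(10)])

if __name__ == "__main__":
    main()
```

The annotations are what allow the compiler to use unboxed, native operations instead of generic object dispatch, which is also why mypyc restricts the language to a statically typeable subset.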
He also mentioned MicroPython, a Kickstarter-funded subset of Python optimized to run on microcontrollers and other constrained environments: a compatible runtime for devices with very little memory (16 kilobytes of RAM and 256 kilobytes of code memory) and minimal computing power. He also talked about the micro:bit. He then mentioned the many dead, dying, or defunct projects for alternative Python interpreters, including Unladen Swallow, Pyston, and IronPython, and talked about PyPy, the JIT Python compiler written in Python.

Łukasz mentions that since PyPy is written in Python 2, it is among the most complex Python 2 applications in the industry. "This is at risk at the moment," says Łukasz, "since it's a large Python 2 codebase that needs updating to Python 3. Without a tremendous investment, it is very unlikely to ever migrate to Python 3." Trying to replicate CPython quirks and bugs also requires a lot of effort.

Python should be aligned with where developer trends are shifting

Łukasz believes that a stronger division between the language and its reference implementation is important for Python. He declared, "If Python stays synonymous with CPython for too long, we'll be in big trouble." This is because CPython is not available where developer trends are shifting. For the web, the lingua franca is now JavaScript. For the two biggest mobile operating systems, there are Swift (the modern take on Objective-C) and Kotlin (the modern take on Java). For VR, AR, and 3D games, there is C# via Unity. While Python is growing fast, it's not winning ground in two big areas, the browser and mobile, and it is slowly losing ground in systems orchestration, where Go is gaining traction. He adds, "if it were not for the rise of machine learning and artificial intelligence, Python would not have survived the transition between Python 2 and Python 3."

Łukasz mentions that providing a clear, supported, official option for the client-side web is what Python needs in order to satisfy the legion of people who want to use it. He says, "for Python the programming language to reach new heights, we need a new kind of Python. One that caters to where developer trends are shifting: mobile, web, VR, AR, and 3D games. There should be more projects experimenting with Python for these platforms. This especially means trying restricted versions of the language because they are easier to optimize."

We need a Python compiler for the web and Python on mobile

Łukasz talked about the need to shift to where developer trends are shifting. He says we need a Python compiler for the web: something that compiles your Python code directly to the web platform. He also adds that, to be viable for professional production use, Python on the web must not be orders of magnitude slower than the default option (JavaScript), which is already better supported and has better documentation and training. Similarly, for mobile he wants a small Python runtime, so that applications start fast and have quick user interactions. He gives the example of Go, noting that "one of Go's claims to fame is the fact that they shipped static binaries so you only have one file. You can choose to still use containers but it's not necessary; you don't have virtual ends, you don't have pip installs, and you don't have environments that you have to orchestrate."

Łukasz further adds that the areas of modern focus where Python currently has no penetration don't require full compatibility with CPython. Starting with a familiar subset of Python that looks like Python to the user would greatly simplify the development of a new runtime or compiler, and could even fit the target platform better.

What if I want to work on CPython?

Łukasz says that developers can still work on CPython if they want to. "I'm not saying that CPython is a dead end; it will forever be an important runtime for Python. New people are still both welcome and needed in fact.
However, working on CPython today is different from working on it ten years ago; the runtime is mission-critical in many industries, which is why developers must be extremely careful."

Łukasz summed up his talk by declaring, "I strongly believe that enabling Python on new platforms is an important job. I'm not saying Python as the entire programming language should just abandon what it is now. I would prefer for us to be able to keep Python exactly as it is and just move it to all new platforms. Albeit, it is not possible without multi-million dollar investments over many years."

The talk was well appreciated on Twitter, with people lauding it as 'fantastic' and 'enlightening'.

https://twitter.com/WillingCarol/status/1156411772472971264
https://twitter.com/freakboy3742/status/1156365742435995648
https://twitter.com/jezdez/status/1156584209366081536

You can watch the full keynote on YouTube.

Read next:
- NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
- Python 3.8 new features: the walrus operator, positional-only parameters, and much more
- Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust

At DefCon 27, DARPA's $10 million voting system could not be hacked by Voting Village hackers due to a bug

Savia Lobo
12 Aug 2019
4 min read
For the last two years, hackers have come to the Voting Village at the DefCon security conference in Las Vegas to scrutinize voting machines and analyze them for vulnerabilities. This year, at DefCon 27, the targeted machines included a $10 million project by DARPA (the Defense Advanced Research Projects Agency). However, hackers were unable to break into the system, not because of robust security features, but due to technical difficulties during setup. "A bug in the machines didn't allow hackers to access their systems over the first two days," CNET reports.

DARPA announced this voting system in March this year, hoping that it "will be impervious to hacking." The system is being designed by Galois, an Oregon-based verifiable-systems firm. "The agency hopes to use voting machines as a model system for developing a secure hardware platform—meaning that the group is designing all the chips that go into a computer from the ground up, and isn't using proprietary components from companies like Intel or AMD," Wired reports. Linton Salmon, the project's program manager at DARPA, says, "The goal of the program is to develop these tools to provide security against hardware vulnerabilities. Our goal is to protect against remote attacks."

Voting Village co-founder Harri Hursti said the five machines brought in by Galois "seemed to have had a myriad of different kinds of problems. Unfortunately, when you're pushing the envelope on technology, these kinds of things happen." The DARPA machines are prototypes, currently running on virtualized versions of the hardware platforms they will eventually use. At Voting Village 2020, however, DARPA plans to bring complete systems for hackers to test. Dan Zimmerman, principal researcher at Galois, said, "All of this is here for people to poke at. I don't think anyone has found any bugs or issues yet, but we want people to find things. We're going to make a small board solely for the purpose of letting people test the secure hardware in their homes and classrooms and we'll release that."

Sen. Wyden says if voting system security standards fail to change, the consequences will be much worse than in 2016

After the cyberattacks during the 2016 U.S. presidential elections, securing voter data in next year's presidential elections is an even higher-stakes concern. Senator Ron Wyden said that if voting system security standards fail to change, the consequences could be far worse than the 2016 elections. In his speech at the Voting Village on Friday, Wyden said, "If nothing happens, the kind of interference we will see from hostile foreign actors will make 2016 look like child's play. We're just not prepared, not even close, to stop it."

Wyden proposed an election security bill requiring paper ballots in 2018. However, the bill was blocked in the Senate by Majority Leader Mitch McConnell, who called it partisan legislation. On Friday, a furious Wyden held McConnell responsible, calling him the reason Congress hasn't been able to fix election security issues. "It sure seems like Russia's No. 1 ally in compromising American election security is Mitch McConnell," Wyden said.

https://twitter.com/ericgeller/status/1159929940533321728

According to a security researcher, the voting system has a terrible software vulnerability

Dan Wallach, a security researcher at Rice University in Houston, Texas, told Wired, "There's a terrible software vulnerability in there. I know because I wrote it.
It's a web server that anyone can connect to and read/write arbitrary memory. That's so bad. But the idea is that even with that in there, an attacker still won't be able to get to things like crypto keys or anything really. All they would be able to do right now is crash the system."

According to CNET, "While the voting process worked, the machines weren't able to connect with external devices, which hackers would need in order to test for vulnerabilities. One machine couldn't connect to any networks, while another had a test suite that didn't run, and a third machine couldn't get online." The prototype lets people vote with a touchscreen and print out their ballot, which they then insert into the verification machine, which ensures the vote is valid through a security scan. According to Wired, Galois even added vulnerabilities on purpose to see how its system defended against flaws.

https://twitter.com/VotingVillageDC/status/1160663776884154369

To know more about this news in detail, head over to Wired's report.

Read next:
- DARPA plans to develop a communication platform similar to WhatsApp
- DARPA's $2 Billion 'AI Next' campaign includes a Next-Generation Nonsurgical Neurotechnology (N3) program
- Black Hat USA 2019 conference Highlights: IBM's 'warshipping', OS threat intelligence bots, Apple's $1M bug bounty programs and much more!

How Data Privacy awareness is changing how companies do business

Guest Contributor
09 Aug 2019
7 min read
Not so long ago, data privacy was a relatively small part of business operations at many companies. They paid attention to it to a minor degree, but it was not a focal point or prime area of concern. That's changing now that businesses recognize that failing to take privacy seriously harms the bottom line, a revelation that is changing how they operate and engage with customers.

One reason for this change is the General Data Protection Regulation (GDPR), which now affects all European Union companies and those that do business with EU residents. Some analysts viewed regulators as slow to begin enforcing the GDPR with fines, but fines imposed in 2019 already total more than $100 million. In 2018, Twitter and Nielsen cited the GDPR as a reason for their falling share prices.

No Single Way to Demonstrate Data Privacy Awareness

One essential thing for companies to keep in mind is that there is no all-encompassing way to show customers they emphasize data security. (Although security and privacy are distinct, they are closely related and affect each other.) What privacy awareness means differs depending on how a business operates. For example, a business might collect data from customers and feed it back to them through an analytics platform; in this case, showing data privacy awareness might mean publishing a policy stating that the company will never sell a person's information to others. For an e-commerce company, emphasizing a commitment to keeping customer information secure might mean going into detail about how it protects sensitive data such as credit card numbers, or discussing the internal strategies it uses to keep customer information as safe as possible from cybercriminals.

One universal aspect of data privacy awareness is that it makes good business sense. The public is now much more aware of data privacy issues than in past years, largely due to the high-profile breaches that capture the headlines.

Lost customers, gigantic fines, and damaged reputations after data breaches and misuse

When companies don't invest in data privacy measures, they can be victimized by severe data breaches, and the ramifications are often substantial. A 2019 study from PCI Pal surveyed customers in the United States and the United Kingdom to determine how their perceptions and spending habits changed following data breaches. It found that 41% of United Kingdom customers and 21% of U.S. customers stop spending money at a business forever if it suffers a data breach. The more common reaction is for consumers to stop spending money at breached businesses for several months afterward; in total, 62% of Americans and 44% of Brits said they'd take that approach.

That's not the only potential hit to a company's profitability. As the Facebook example discussed below indicates, there can also be massive fines. Two other recent examples involve the British Airways and Marriott Hotels breaches. A data regulatory body in the United Kingdom imposed the largest-ever data breach fine on British Airways after a 2018 hack, with the penalty totaling £183 million (more than $228 million). The same authority then gave Marriott Hotels the equivalent of a $125 million fine for its incident, alleging inadequate cybersecurity and data privacy due diligence. These enormous fines don't only happen in the United Kingdom. The U.S.
Federal Trade Commission (FTC) reached a settlement with Equifax that required the company to pay $700 million after its now-infamous data breach. It's easy to see why losing customers after such incidents makes substantial fines even more painful for the companies that have to pay them. The FTC also investigated Facebook's Cambridge Analytica scandal and handed the company a $5 billion fine for failing to adequately protect customer data, the largest fine the FTC has ever imposed.

Problems also occur when companies misuse data. Take the class-action lawsuit filed against AT&T: the telecom giant and a couple of data aggregation enterprises allegedly permitted third-party companies to access individuals' real-time locations via mobile phone data, without first checking whether the customers allowed such access. Such news can cause irreparable reputational damage and make people hesitant to do business with the companies involved.

Expecting customers to read privacy policies is not sufficient

Companies rely on both back-end and customer-facing strategies to meet their data security goals and earn customer trust. Some businesses go beyond the norm by publishing sections on their websites that detail how their infrastructure supports data privacy, discussing the implementation of things like multi-layered data access authorization frameworks, physical access controls for server rooms, and data encryption at rest and in transit. But one of the more prominent customer-facing declarations of a company's commitment to keeping data secure is the privacy policy, now a fixture of modern websites.

Companies cannot bypass publishing their privacy policies, of course. However, most people don't take the time to read those documents. An Axios/SurveyMonkey poll spotlighted a disconnect between respondents' beliefs and actions: although 87% of them felt it was either somewhat or very important to understand a company's privacy policy before signing up for something, 56% always or usually agree to it without reading it. Further research by Varonis found that it can take nearly half an hour to read some privacy policies, and that their reading level became more advanced after the GDPR came into effect. Together, these studies illustrate that companies need to go beyond expecting customers to read their privacy policies; they should work hard to make them shorter and easier for people to understand.

Most people want companies to take a stand for data privacy

A study of 1,000 people conducted in the United Kingdom supported an earlier finding from Gemalto that people consider the companies holding their data responsible for maintaining its security. It concluded that customers felt it was "highly important" for businesses to take a stand for information security and privacy, with 53% expecting firms to do so. Moreover, the results of a CIGI-Ipsos worldwide survey showed that 53% of those polled were more concerned about online privacy than a year earlier, and that 49% said their rising distrust of the internet made them provide less information online.

Companies must show they care about data privacy and work that aspect into their business strategies. Otherwise, they may find that customers leave them in favor of more privacy-centric organizations. To get an idea of what can happen when a company blunders on data privacy, one only needs to look at how Facebook users responded in the Cambridge Analytica aftermath.
Statistics published by the Pew Research Center showed that 54% of adult users changed their privacy settings in the past year, while approximately a quarter stopped using the site. After the news about Facebook and Cambridge Analytica broke, many media outlets reminded people that they could download all the data Facebook had about them. The Pew Research Center found that although only 9% of its respondents took that step, 47% of those who did removed the app from their phones.

Data privacy is a top-of-mind concern

The studies and examples mentioned here strongly suggest consumers are no longer willing to accept the possible wrongful treatment of their data. They increasingly hold companies accountable and show no forgiveness when their privacy expectations are not met. The most forward-thinking companies see this change and respond accordingly; those that choose inaction instead are at risk of losing out. Individuals understand that companies value their data, but they aren't willing to part with it freely unless companies convey trustworthiness first.

Author Bio: Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com.

Read next:
- Facebook fails to block ECJ data security case from proceeding
- ICO to fine Marriott over $124 million for compromising 383 million users' data
- Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content

Black Hat USA 2019 conference Highlights: IBM’s ‘warshipping’, OS threat intelligence bots, Apple’s $1M bug bounty programs and much more!

Savia Lobo
09 Aug 2019
9 min read
The popular Black Hat USA 2019 conference was held from August 3 to August 8 in Las Vegas. The conference included technical training sessions conducted by international industry and subject matter experts to provide hands-on offensive and defensive skill-building opportunities, as well as briefings from security experts who shared their latest findings, open-source tools, zero-day exploits, and more. Tech giants including Apple, IBM, and Microsoft made some interesting announcements: Apple and Microsoft expanded their bug bounty programs, and IBM demonstrated a new "warshipping" attack. Black Hat USA 2019 also showcased many interesting open-source tools and products, such as Scapy, a Python-based interactive packet manipulation program, and CyBot, an open-source threat intelligence chatbot, among others.

Apple, IBM, and Microsoft announcements at Black Hat USA 2019

Apple expands its bug bounty program; announces new iOS 'security research device program'

Ivan Krstić, Apple's head of security engineering, announced that Apple is expanding its bug bounty program by making it available to all security researchers. Previously, the program was open only to those on the company's invite-only list, and the top reward was $200,000. Following this announcement, rewards of up to $1 million will be awarded to those who find vulnerabilities in Apple's iPhones and Macs. Krstić also said that next year Apple will provide special iPhones to security researchers to help them find security flaws in iOS. To know more about this news in detail, head over to our complete coverage.

IBM's X-Force Red team announces new 'warshipping' hack to infiltrate corporate networks

IBM's offensive security team, X-Force Red, announced a new attack technique nicknamed "warshipping." As Forbes describes it, where wardriving means cruising a neighborhood scouting for Wi-Fi networks, warshipping allows a hacker to remotely infiltrate corporate networks by simply hiding, inside a shipped package, a remote-controlled scanning device designed to penetrate the wireless network (of a company or the CEO's home) and report back to the sender. Charles Henderson, head of IBM X-Force Red, said, "Think of the volume of boxes moving through a corporate mailroom daily. Or consider the packages dropped off on the porch of a CEO's home, sitting within range of their home Wi-Fi. Using warshipping, X-Force Red was able to infiltrate corporate networks undetected." To demonstrate the approach, the X-Force team built a low-power gizmo consisting of a $100 single-board computer with built-in 3G and Wi-Fi connectivity and GPS. It's smaller than the palm of your hand and can be hidden in a package sent out for delivery to a target's business or home. To know more about this announcement, head over to Forbes.

Microsoft adds $300,000 to its Azure bounty program

For anyone who can successfully hack Azure, Microsoft's public-cloud infrastructure service, the company has increased the bug bounty reward by adding $300,000.
Kymberlee Price, a Microsoft security manager, said, "To make it easier for security researchers to confidently and aggressively test Azure, we are inviting a select group of talented individuals to come and do their worst to emulate criminal hackers." Further, to avoid causing any disruptions to its corporate customers, Microsoft has set up a dedicated, customer-safe cloud environment, the Azure Security Lab: a set of dedicated cloud hosts, similar to a sandbox environment and totally isolated from Azure customers, for security researchers to test attacks against Microsoft's cloud infrastructure. To know more about this announcement in detail, head over to Microsoft's official post.

Some open-source tools and products launched at Black Hat USA 2019

Scapy: Python-Based Interactive Packet Manipulation Program + Library

Scapy is a powerful Python-based interactive packet manipulation program and library. Scapy can forge or decode packets of a wide number of protocols, send them on the wire, capture them, store or read them using pcap files, match requests and replies, and much more. It easily handles most classical tasks like scanning, tracerouting, probing, unit tests, attacks, and network discovery. It also performs well at a lot of specific tasks that most other tools can't handle, like sending invalid frames, injecting your own 802.11 frames, and combining techniques (VLAN hopping + ARP cache poisoning, VoIP decoding on a WEP-protected channel, and so on).
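As a taste of what "interactive packet manipulation" means, here is a minimal Scapy sketch that builds an ICMP echo request layer by layer and decodes the reply. Scapy is on PyPI; sending raw packets generally requires root privileges, and the destination address below is a placeholder.

```python
# Minimal Scapy usage: compose layers with "/", send, decode the reply.
from scapy.all import IP, ICMP, sr1

pkt = IP(dst="192.0.2.1") / ICMP()          # 192.0.2.1 is a TEST-NET placeholder
reply = sr1(pkt, timeout=2, verbose=False)  # send at layer 3, wait for one reply
if reply is not None:
    reply.show()  # pretty-print the decoded layers of the response
```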
CyBot: Open-Source Threat Intelligence Chat Bot

The goal in creating CyBot was “to create a repeatable process using a completely free open source framework, an inexpensive Raspberry Pi (or even virtual machine), and host a community-driven plugin framework to open up the world of threat intel chatbots to everyone from the average home user to the largest security operations center”, speaker Tony Lee highlights. CyBot first debuted at Black Hat Arsenal Vegas 2017 and was also taken to Black Hat Europe and Asia to gather more great feedback and ideas from an enthusiastic international crowd. That feedback helped the researchers enhance and upgrade the CyBot platform. Now, you can build your own CyBot within an hour for anywhere from $0 to $35 in expenses.

Azucar: Multi-Threaded Plugin-Based Tool to Help Assess the Security of Azure Cloud Environment Subscription

Azucar is a multi-threaded, plugin-based tool to help assess the security of an Azure Cloud environment subscription. By leveraging the Azure API, Azucar automatically gathers a variety of configuration data and analyses all data relating to a particular subscription in order to determine security risks.

EXPLIoT: IoT Security Testing and Exploitation Framework

EXPLIoT, developed in Python 3, is a framework for security testing and exploiting IoT products and IoT infrastructure. It includes a set of plugins (test cases) which are used to perform the assessment and can be extended easily with new ones. It can be used as a standalone tool for IoT security testing and, more interestingly, it provides building blocks for writing new plugins/exploits and other IoT security assessment test cases with ease. EXPLIoT supports most IoT communication protocols, hardware interfacing functionality, and test cases that can be used from within the framework to quickly map and exploit an IoT product or IoT infrastructure.

PyRDP: Python 3 Remote Desktop Protocol Man-in-the-Middle (MITM) and Library

PyRDP is an RDP man-in-the-middle tool that has applications in pentesting and malware research. In pentesting, PyRDP has a number of features that allow attackers to compromise RDP sessions when combined with TCP man-in-the-middle solutions. On the malware research side, PyRDP can be used as part of a fully interactive honeypot: it can be placed in front of a Windows RDP server to intercept malicious sessions, and it can replace the credentials provided in the connection sequence with working credentials to accelerate compromise and malicious behavior collection.

MoP: Master of Puppets - Open Source Super Scalable Advanced Malware Tracking Framework for Reverse Engineers

MoP (“Master of Puppets”) is an open-source framework for reverse engineers who want to create and operate trackers for newly found malware for research. MoP ships with a variety of workstation simulation capabilities, such as a fake filesystem manager and a fake process manager, multi-worker orchestration, TOR integration, and more, all aiming to deceive adversaries into interacting with a simulated environment and possibly dropping new unique samples. As the developers put it, “Since everything is done in pure python, no virtual machines or Docker containers are needed and no actual malicious code is executed, all of which enables us to scale up in a click of a button, connecting to potentially thousands of different malicious servers at once from a single instance running on a single laptop.”

Commando VM 2.0: Security Distribution for Penetration Testers and Red Teamers

Commando VM is an open-source Windows-based security distribution designed for penetration testers and red teamers. It is an add-on to FireEye’s very successful reverse engineering distribution, FLARE VM. Similar to Kali Linux, Commando VM is designed with an arsenal of open-source offensive tools that will help operators achieve assessment objectives. Built on Windows, Commando VM comes with native support for accessing Active Directory environments. Commando VM also includes:
- Web application assessment tools
- Scripting languages (such as Python and Go)
- Information gathering tools (such as Nmap, Wireshark, and PowerView)
- Exploitation tools (such as PowerSploit, GhostPack, and Mimikatz)
- Persistence tools, lateral movement tools, evasion tools, post-exploitation tools (such as FireEye’s SessionGopher), remote access tools, command-line tools, and all the might of FLARE VM’s reversing tools

Commando VM 1.0 debuted at Black Hat Asia in Singapore this year, and less than two weeks after release its “GitHub repository had over 2000 followers and over 400 forks”.

BLACKPHENIX: Malware Analysis + Automation Framework

The BLACKPHENIX framework performs intelligent automation and analysis by combining all the known malware analysis approaches, automating the time-consuming stages, and counter-attacking malware behavioral patterns. The objective of this framework is to generate precise IOCs by revealing the real malware purpose and exposing its hidden data and related functionalities that are used to exfiltrate or compromise user information. The framework focuses on consolidating, correlating, and cross-referencing the data collected between analysis stages by executing Python scripts and helper modules, providing full synchronization between the debugger, disassembler, and supporting components.
AutoMacTC: Finding Worms in Apple Orchards - Using AutoMacTC for macOS Incident Response

AutoMacTC is an open-source Python framework that can be quickly deployed to gather forensic data on macOS devices, from the artifacts that matter most to you and your investigation. The speakers Kshitij Kumar and Jai Musunuri say, “Performing forensic imaging and deep-dive analysis can be incredibly time-consuming and induce data fatigue in analysts, who may only need a select number of artifacts to identify leads and start finding answers. The resources-to-payoff ratio is impractical.” AutoMacTC captures sufficient data in a single location, equipping responders with exactly that.

To know about other open-source products in detail, head over to the Arsenal section. Black Hat USA 2019 also hosted a number of training sessions for cybersecurity developers, pentesters, and other security enthusiasts. To know more about the entire conference in detail, head over to the Black Hat USA 2019 official website.

Google Project Zero reveals six “interactionless” bugs that can affect iOS via Apple’s iMessage
Apple plans to suspend Siri response grading process due to privacy issues
Apple Card, iPhone’s new payment system, is now available for select users

CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed

Vincy Davis
09 Aug 2019
7 min read
Last year, the Cloud Native Computing Foundation (CNCF) initiated a process of conducting third-party security audits for its own projects. The aim of these security audits was to improve the overall security of the CNCF ecosystem. CoreDNS, Envoy, and Prometheus are some of the CNCF projects which underwent these audits, resulting in the identification of several security issues and vulnerabilities. With the help of the audit results, CoreDNS, Envoy, and Prometheus addressed their security issues and later provided users with documentation for the same.

CNCF CTO Chris Aniszczyk says, “The main takeaway from these initial audits is that a public security audit is a great way to test the quality of an open source project along with its vulnerability management process and more importantly, how resilient the open source project’s security practices are.” He also announced that, later this year, CNCF will initiate a bounty program for researchers who identify bugs and other cybersecurity shortcomings in its projects.

After tasting initial success, CNCF formed a Security Audit Working Group to provide security audits to its graduated projects, using funds provided by the CNCF community. CNCF’s graduated projects include Kubernetes, Envoy, and Fluentd, among others. Due to the complexity and wide scope of the project, the Working Group appointed two firms, Trail of Bits and Atredis Partners, to perform the Kubernetes security audit. Trail of Bits applies high-end security research to identify security vulnerabilities, reduce risk, and strengthen code. Similarly, Atredis Partners does complex, research-driven security testing and consulting.

Kubernetes security audit findings

Three days ago, the Trail of Bits team released an assessment report called the Kubernetes Security Whitepaper, which covers all the key aspects of the Kubernetes attack surface and security architecture. It aims to empower administrators, operators, and developers to make better design and implementation decisions. The Security Whitepaper presents a list of potential threats to a Kubernetes cluster.

https://twitter.com/Atlas_Hugged/status/1158767960640479232

Kubernetes cluster vulnerabilities

A Kubernetes cluster consists of several base components such as the kubelet, kube-apiserver, kube-scheduler, kube-controller-manager, and a kube-apiserver storage backend. Components like controllers and schedulers assist in networking, scheduling, or environment management. Once a base Kubernetes cluster is configured, it is managed through operator-defined objects. These operator-defined objects are referred to as abstractions, which represent the state of the Kubernetes cluster. To provide easy configuration and portability, the abstractions are component-agnostic, which further increases the operational complexity of a Kubernetes cluster.
Since Kubernetes is a large system with many functionalities, the security audit was conducted on eight selected components within the larger Kubernetes ecosystem:
- Kube-apiserver
- Etcd
- Kube-scheduler
- Kube-controller-manager
- Cloud-controller-manager
- Kubelet
- Kube-proxy
- Container Runtime

The Trail of Bits team first identified three types of attackers within a Kubernetes cluster:
- External attackers (who do not have access to the cluster)
- Internal attackers (who have transited a trust boundary)
- Malicious internal users (who abuse their privilege within the cluster)

The security audit resulted in a total of 37 findings: 5 high severity, 17 medium severity, 8 low severity, and 7 informational, spanning the access control, authentication, timing, and data validation of a Kubernetes cluster. Some of the findings include:
- Insecure TLS is in use by default
- Credentials are exposed in environment variables and command-line arguments
- Names of secrets are leaked in logs
- No certificate revocation
- seccomp is not enabled by default

Recommendations for Kubernetes cluster administrators and developers

The Trail of Bits team has proposed a list of best practices and guideline recommendations for cluster administrators and developers.

Recommendations for cluster administrators
- Attribute Based Access Controls vs Role Based Access Controls: Role-Based Access Control (RBAC) policies can be configured dynamically while a cluster is operational. In contrast, Attribute Based Access Control (ABAC) policies are static in nature, which increases the difficulty of ensuring proper deployment and enforcement of controls.
- RBAC best practices: Administrators are advised to test their RBAC policies to ensure that the policies defined on the cluster are backed by an appropriate component configuration and that the policies properly restrict behavior (see the sketch after this list).
- Node-host configurations and permissions: Appropriate authentication and access controls should be in place for the cluster nodes, as an attacker with network access can use Kubernetes components to compromise other nodes.
- Default settings and backwards compatibility: Kubernetes contains many default settings which negatively impact the security of a cluster. Hence, cluster operators and administrators must ensure that component and workload settings are rapidly changed and redeployed in case of a compromise or an update.
- Networking: Due to the complexity of Kubernetes networking, there are many recommendations for maintaining a secure network. Among them: proper segmentation and isolation rules for the underlying cluster hosts should be defined, and hosts executing control-plane components should be isolated to the greatest extent possible.
- Environment considerations: The security of a cluster’s operating environment should be addressed. If a cluster is hosted on a cloud provider, administrators should ensure that best-practice hardening rules are implemented.
- Logging and alerting: Centralized logging of both workload and cluster host logs is recommended to enable debugging and event reconstruction.
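To ground the RBAC guidance above, here is a minimal, hypothetical Role and RoleBinding pair granting one user read-only access to pods in a single namespace; the namespace and user name are placeholders, not examples from the audit:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web        # placeholder namespace
  name: pod-reader
rules:
- apiGroups: [""]       # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: web
subjects:
- kind: User
  name: jane            # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

In line with the testing advice above, an administrator can then verify that the policy actually restricts behavior, for example with `kubectl auth can-i delete pods --namespace web --as jane`, which should answer "no".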
Recommendations for developers
- Avoid hardcoding paths to dependencies: Developers are advised to be conservative and cautious when handling external paths. Users should be warned if a path is not found, and have an option to specify it through a configuration variable.
- File permissions checking: Kubernetes should provide users the ability to perform file permissions checking, and enable this feature by default. This will help prevent common file permissions misconfigurations and promote more secure practices.
- Monitoring processes on Linux: A Linux process is uniquely identified in user space via a process identifier, or PID. A PID points to a given process only as long as the process is alive; if it dies, the PID can be reused by another spawned process (a minimal identity check is sketched after this list).
- Moving processes to a cgroup: When moving a given process to a less restricted cgroup, it is necessary to validate that the process is still the correct process after performing the movement.
- Future cgroup considerations for Kubernetes: Both Kubernetes and the components it uses (runc, Docker) have no support for cgroups v2. Currently, this is not an issue; however, the topic is worth tracking, as it might change in the future.
- Future process handling considerations for Kubernetes: Tracking, and participating in, the development of process (and thread) handling on Linux is highly recommended.
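As a rough illustration of the PID-reuse pitfall flagged above, a monitor can record a process's start time alongside its PID and re-check both before acting. This is a minimal sketch assuming a Linux /proc filesystem, not code from the audit:

```python
def pid_start_time(pid: int) -> int:
    """Read the process start time (clock ticks since boot) from /proc/<pid>/stat;
    the PID plus start time identifies a process instance uniquely."""
    with open(f"/proc/{pid}/stat") as f:
        # Split after the ")" closing the comm field, which may itself contain spaces
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[19])  # 'starttime' is the 22nd field of the stat line

def is_same_process(pid: int, recorded_start: int) -> bool:
    """Check that a PID still refers to the process originally observed."""
    try:
        return pid_start_time(pid) == recorded_start
    except FileNotFoundError:
        return False  # the process exited; its PID may later be reused
```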
Kubernetes security audit sets precedent for other open source projects

By conducting security audits and open sourcing the findings, Kubernetes, a widely used container-orchestration system, is setting a great precedent for other projects. This shows Kubernetes’ commitment to maintaining security in its ecosystem. Though the number of security flaws found in the audit may upset a Kubernetes developer, it also assures them that the project is trying its best to stay ahead of potential attackers. The Security Whitepaper and the threat model provided in the security audit are expected to be of great help to Kubernetes community members as future references. Developers have also appreciated CNCF for undertaking great efforts in securing the Kubernetes system.

https://twitter.com/thekonginc/status/1159578833768501248
https://twitter.com/krisnova/status/1159656574584930304
https://twitter.com/zacharyschafer/status/1159658866931589125

To know more details about the security audit of Kubernetes, check out the Kubernetes Security Whitepaper.

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!
Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure

Julia co-creator, Jeff Bezanson, on what’s wrong with Julialang and how to tackle issues like modularity and extension

Vincy Davis
08 Aug 2019
5 min read
The Julia language, which has been touted as the new fastest-growing programming language, held its 6th annual JuliaCon 2019 from July 22nd to 26th in Baltimore, USA. On the fourth day of the conference, the co-creator of the Julia language and co-founder of Julia Computing, Jeff Bezanson, gave a talk explaining “What’s bad about Julia”.

Bezanson began with a disclaimer that he would mention only those bad things in Julia that he is currently aware of. He then listed the most popular issues with the language.

What’s wrong with Julia
- Compiler latency: Compiler latency has been one of the highest-priority issues in Julia. It is a lot slower compared to other languages like Python (~27x slower) or C (~187x slower).
- Static compilation support: Of course, Julia can be compiled, but unlike C, which is compiled before execution, Julia is compiled at runtime. Julia thus provides poor support for static compilation.
- Immutable arrays: Many developers have contributed immutable array packages; however, many of these packages assume mutability by default, resulting in more work for users. Julia users have therefore been requesting better support for immutable arrays.
- Mutation issues: This is a common stumbling block for Julia developers, as many complain that it is difficult to identify what is safe to mutate.
- Array optimizations: To get good performance, Julia users have to manually use in-place operations to get high-performance array code.
- Better traits: Users have been requesting more traits in Julia, to avoid the big unions that list all the instances of a type instead of adding a declaration. This has been a big issue in array code and linear algebra.
- Incomplete notations: Much Julia code has incomplete notation, for example for N-d arrays.

Many members of the audience agreed with Bezanson’s list and appreciated his frank efforts in accepting the problems in Julia. In this talk, Bezanson opted to explore two less-discussed Julia issues, modularity and extension, which he says are weird and worrisome even to him.

How to tackle modularity and extension issues in Julia

A typical Julia module extends functions from another module. This helps users compose many things and get lots of new functionality for free. However, what if a user wants a separately compiled module, which would be completely sealed, predictable, and quicker to compile: an isolated module?

Bezanson illustrated how the two issues of modularity and extension can be avoided in Julia code. He started with two unrelated packages, which can communicate with each other by using extension in another base package. This scenario, he states, is common in a core module, which requires a few primitives like an Any type, an Int type, and others. The two packages in a core module are called Core.Compiler and Base, each having their own definitions. The two packages have some code in common, which requires the user to write the same code twice in both packages; Bezanson thinks this is “fine”.

The thornier problem, Bezanson says, is the typeof present in the core module. As both these packages need to define constructors for their own types, it is not possible to share these constructors. This means that, except for constructors, everything else is isolated between the two packages.
He adds, “In practice, it doesn’t really matter because the types are different, so they can be distinguished just fine, but I find it annoying that we can’t sort of isolate those method tables of the constructors. I find it kind of unsatisfying that there’s just this one exception.”

Bezanson then explained how types can be described using different representations and extensions. Later, he offered two rules for tackling method specificity issues in Julia. The first rule is that a signature is more specific if it is a strict subtype (<:, not ==) of another signature. The second rule, according to Bezanson, is that ambiguity cannot be avoided: if methods overlap in arguments and have no specificity relationship, then “users have to give an ambiguity error”. Bezanson says users can thus stay on the safe side and assume that things do overlap. Also, if two signatures are similar, “then it does not matter which signature is called”, adds Bezanson.

Finally, after explaining the workarounds for these issues, Bezanson concluded that “Julia is not that bad”, and that the “Julia language could be a lot better and the team is trying their best to tackle all the issues.”

Watch the video below to check out all the illustrations demonstrated by Bezanson during his talk.

https://www.youtube.com/watch?v=TPuJsgyu87U

Julia users around the world have loved Bezanson’s honest and frank talk at JuliaCon 2019.

https://twitter.com/MoseGiordano/status/1154371462205231109
https://twitter.com/johnmyleswhite/status/1154726738292891648

Read More

Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
Creating a basic Julia project for loading and saving data [Tutorial]

What is a Magecart attack, and how can you protect your business?

Guest Contributor
07 Aug 2019
5 min read
Recently, British Airways was slapped with a $230M fine after attackers stole data from hundreds of thousands of its customers in a massive breach. The fine, the result of a GDPR prosecution, was issued after a 2018 Magecart attack. Attackers were able to insert around 22 lines of code into the airline’s website, allowing them to capture customer credit card numbers and other sensitive pieces of information.

Magecart attacks have largely gone unnoticed within the security world in recent years, in spite of the fact that the majority occur at eCommerce companies or other businesses that collect credit card information from customers. Magecart has been responsible for significant damage, theft, and fraud across a variety of industries. According to a 2018 report conducted by RiskIQ and Flashpoint, at least 6,400 websites had been affected by Magecart as of November 2018.

To safeguard against Magecart and protect your organization from web-based threats, there are a few things you should do:

Understand how Magecart attacks happen

There are two approaches hackers take when it comes to Magecart attacks: the first focuses on attacking the main website or application, while the second focuses on exploiting third-party tags. In both cases, the intent is to insert malicious JavaScript which can then skim data from HTML forms and send that data to servers controlled by the attackers.

Users typically enter personal data, whether for authentication, searching for information, or checking out with a credit card, into a website through an HTML form. Magecart attacks utilize JavaScript to monitor for this kind of sensitive data when it’s entered into specific form fields, such as a password, social security number, or credit card number. They then make a copy of it and send the copy to a different server on the internet.

In the British Airways attack, for example, hackers inserted malicious code into the airline’s baggage claim subdomain, which appears to have been less secure than the main website. This code was referenced on the main website, and when run within the airline’s customers’ browsers, it could skim credit card and other personal information.

Get ahead of the confusion that surrounds the attacks

Magecart attacks are very difficult for web teams to identify because they do not take place on the provider’s backend infrastructure, but instead within the visitor’s browser. This means data is transferred directly from the browser to malicious servers, without any interaction with the backend website server (the origin) needing to take place. As a result, auditing the backend infrastructure and the code supporting the website on a regular basis won’t stop attacks, because the issue is happening in the user’s browser, which traditional auditing won’t detect. This means Magecart attacks can only be discovered when the company is alerted to credit card fraud or when a client-side code review, including all third-party services, takes place. Because of this, there are still many sites online today that hold malicious Magecart JavaScript within their pages, leaking sensitive information.

Restrict access to sensitive data

There are a number of things your team can do to prevent Magecart attacks from threatening your website visitors. First, it’s beneficial if your team limits third-party code on sensitive pages. People tend to add third-party tags all over their websites, but consider whether you really need that kind of functionality on high-security pages (like your checkout or login pages). Removing non-essential third-party tags like chat widgets and site surveys from sensitive pages limits your exposure to potentially malicious code.

Next, you should consider implementing a content security policy (CSP). Web teams can build policies that dictate which domains can run code and send data on sensitive pages. While this approach requires ongoing maintenance, it’s one way to prevent malicious hackers from gaining access to visitors’ sensitive information.
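As a sketch of what such a policy might look like, the response header below (the CDN domain is a placeholder) only allows scripts from the site itself and one vetted vendor, and restricts where scripted requests and form submissions may be sent, which are exactly the exfiltration paths a skimmer relies on:

```
Content-Security-Policy: default-src 'self';
    script-src 'self' https://cdn.vetted-vendor.example;
    connect-src 'self';
    form-action 'self'
```

In practice this is delivered as a single header line: connect-src limits where scripts may send data via fetch or XHR, and form-action limits where forms may submit.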
Another approach is to adopt a zero-trust strategy. Web teams can look to a third-party security service that allows creating a policy that, by default, blocks access to sensitive data entered in web forms or stored in cookies. The team can then restrict access to this data to everyone except a select set of vetted scripts. With these policies in place, if malicious skimming code does make it onto your site, it won’t be able to access any sensitive information, and alerts will let you know when a vendor’s code has been exploited.

Magecart doesn’t need to destroy your brand. Web skimming attacks can be difficult to detect because they don’t attack core application infrastructure; they focus on the browser, where protections are not in place. As such, many brands are confused about how to protect their customers. However, implementing a zero-trust approach, thinking critically about how many third-party tags your website really needs, and limiting who is able to run code will help you keep your customer data safe.

Author bio

Peter is the VP of Technology at Instart. Previously, Peter was with Citrix, where he was senior director of product management and marketing for the XenClient product. Prior to that, he held a variety of pre-sales, web development, and IT admin roles, including five years at Marimba working with enterprise change management systems. Peter has a BA in Political Science with a minor in Computer Science from UCSD.

British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach.
A universal bypass tricks Cylance AI antivirus into accepting all top 10 Malware.
An IoT worm Silex, developed by a 14 year old resulted in malware attack and took down 2000 devices

Following Capital One data breach, GitHub gets sued and AWS security questioned by a U.S. Senator

Savia Lobo
07 Aug 2019
5 min read
Last week, Capital One revealed it was subject to a major data breach due to a configuration vulnerability in its firewall that allowed access to data stored on Amazon S3, affecting 106 million users in the US and Canada. A week after the breach, not only Capital One but also GitHub and Amazon are facing scrutiny for their inadvertent role in the breach.

Capital One and GitHub sued in California

Last week, the law firm Tycko & Zavareei LLP filed a lawsuit in California's federal district court on behalf of its plaintiffs, Seth Zielicke and Aimee Aballo. Both plaintiffs claim Capital One and GitHub were unable to protect users’ personal data. The complaint highlighted that Paige A. Thompson, the alleged hacker, stole the data in March and posted about the theft on GitHub in April.

According to the lawsuit, “As a result of GitHub’s failure to monitor, remove, or otherwise recognize and act upon obviously-hacked data that was displayed, disclosed, and used on or by GitHub and its website, the Personal Information sat on GitHub.com for nearly three months.”

The law firm also alleged that, with the help of computer logs, Capital One should have known about the data breach when the information was first stolen in March. The firm “criticized Capital One for not taking action to respond to the breach until last month,” The Hill reports. The lawsuit also alleges that GitHub “encourages (at least) friendly hacking." “GitHub had an obligation, under California law, to keep off (or to remove from) its site Social Security numbers and other Personal Information," the lawsuit further mentions.

According to Newsweek, GitHub also violated the federal Wiretap Act, "which permits civil recovery for those whose 'wire, oral, or electronic communication' has been 'intercepted, disclosed, or intentionally used' in violation of, inter alia, the Wiretap Act."

A GitHub spokesperson told Newsweek, "GitHub promptly investigates content, once it's reported to us, and removes anything that violates our Terms of Service." "The file posted on GitHub in this incident did not contain any Social Security numbers, bank account information, or any other reportedly stolen personal information. We received a request from Capital One to remove content containing information about the methods used to steal the data, which we took down promptly after receiving their request," the spokesperson added.

On 30th July, New York Attorney General Letitia James also announced that her office is opening an investigation into the Capital One data breach. “My office will begin an immediate investigation into Capital One’s breach, and will work to ensure that New Yorkers who were victims of this breach are provided relief. We cannot allow hacks of this nature to become every day occurrences,” James said in a statement.

Many are confused about why a lawsuit was filed against GitHub, as they believe GitHub is not at fault. Tony Webster, a journalist and public records researcher, tweeted, “I genuinely can't tell if this lawsuit is incompetence or malice. GitHub owed no duty to CapitalOne customers. This would be like suing a burglar's landlord because they didn't detect and stop their tenant from selling your stolen TV from their apartment.”
https://twitter.com/rickhholland/status/1157658909563379713
https://twitter.com/NSQE/status/1157479467805057024
https://twitter.com/xxdesmus/status/1157679112699277312

A user on Hacker News writes, “This is incredible: they're suggesting that, in the same way that YouTube has content moderators, GitHub should moderate every repository that has a 9-digit sequence. They also say that GitHub "promotes hacking" without any nuance regarding modern usage of the word, and they claim that GitHub had a "duty" to put processes in place to monitor submitted content, and that by not having such processes they were in violation of their own terms of service. I hope that this gets thrown out. If not, it could have severe consequences for any site hosting user-generated content.”

Read the lawsuit to know more about this news in detail.

U.S. Senator’s letter to Amazon CEO raises questions about the security of AWS products

Yesterday, Senator Ron Wyden wrote to Amazon’s CEO, Jeff Bezos, “requesting details about the security of Amazon’s cloud service”, the Wall Street Journal reports. The letter puts forth questions to understand how the configuration error occurred and what measures Amazon is taking to protect its customers. The Journal reported that “more than 800 Amazon users were found vulnerable to a similar configuration error, according to a partial scan of cloud users, conducted in February by a security researcher.”

According to the Senator’s letter, “When a major corporation loses data on a hundred million Americans because of a configuration error, attention naturally focuses on that corporation’s cybersecurity practices.” “However, if several organizations all make similar configuration errors, it is time to ask whether the underlying technology needs to be made safer and whether the company that makes it shares responsibility for the breaches,” the letter further mentions.

Jeff Bezos has been asked to reply to these questions by August 13, 2019. “Amazon has said that its cloud products weren’t the cause of the breach and that it provides tools to alert customers when data is being improperly accessed,” WSJ reports. Capital One did not comment on this news.

Read the complete letter to know more in detail.

U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches
Facebook fails to fend off a lawsuit over data breach of nearly 30 million users
Equifax breach victims may not even get the promised $125; FTC urges them to opt for 10-year free credit monitoring services

Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans

Fatema Patrawala
06 Aug 2019
6 min read
Last week, the Facebook AI research team published a progress report on dialogue research aimed at building more engaging and personalized AI systems. According to the team, “Dialogue research is a crucial component of building the next generation of intelligent agents. While there’s been progress with chatbots in single-domain dialogue, agents today are far from capable of carrying an open-domain conversation across a multitude of topics. Agents that can chat with humans in the way that people talk to each other will be easier and more enjoyable to use in our day-to-day lives — going beyond simple tasks like playing a song or booking an appointment.”

In their blog post, they describe new open source data sets, algorithms, and models that improve five common weaknesses of open-domain chatbots today: maintaining consistency, specificity, empathy, knowledgeability, and multimodal understanding. Let us look at each one in detail.

Dataset called Dialogue NLI introduced for maintaining consistency

Inconsistencies are a common issue for chatbots, partly because most models lack explicit long-term memory and semantic understanding. The Facebook team, in collaboration with colleagues at NYU, developed a new way of framing the consistency of dialogue agents as natural language inference (NLI) and created a new NLI data set called Dialogue NLI, used to improve and evaluate the consistency of dialogue models.

The team showcased an example from the Dialogue NLI model, wherein they considered two utterances in a dialogue as the premise and hypothesis, respectively. Each pair was labeled to indicate whether the premise entails, contradicts, or is neutral with respect to the hypothesis. Training an NLI model on this data set and using it to rerank the model’s responses to entail previous dialogues, or maintain consistency with them, improved the overall consistency of the dialogue agent. Across these tests, they report roughly 3x fewer contradictions in the generated sentences.
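As a rough sketch of the reranking idea (our illustration, not Facebook's published implementation), assume a hypothetical nli_model whose predict method returns "entailment", "neutral", or "contradiction" for a premise/hypothesis pair; candidate replies that contradict the persona or dialogue history are then demoted:

```python
def rerank_candidates(history, candidates, nli_model):
    """Order candidate replies so those contradicting prior utterances rank last."""
    def contradiction_count(candidate):
        # Treat each earlier utterance as the premise and the candidate as the hypothesis
        return sum(
            nli_model.predict(premise=utterance, hypothesis=candidate) == "contradiction"
            for utterance in history
        )
    return sorted(candidates, key=contradiction_count)

# Hypothetical usage:
# history = ["I have two dogs.", "I live in Madrid."]
# best = rerank_candidates(history, ["I have no pets.", "My dogs love the park."], nli_model)[0]
```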
Several conversational attributes were studied to balance specificity

As per the team, generative dialogue models frequently default to generic, safe responses like “I don’t know”, even for queries that need specific responses. Hence, the Facebook team, in collaboration with Stanford AI researcher Abigail See, studied how to fix this by controlling several conversational attributes, like the level of specificity. In one experiment, they conditioned a bot on character information and asked “What do you do for a living?” A typical chatbot responds with the generic statement “I’m a construction worker.” With control methods, the chatbots proposed more specific and engaging responses, like “I build antique homes and refurbish houses."

In addition to specificity, the team noted that balancing question-asking and answering, and controlling how repetitive the models are, make significant differences. The better the overall conversation flow, the more engaging and personable the chatbots and dialogue agents of the future will be.

Chatbots’ ability to display empathy while responding was measured

The team worked with researchers from the University of Washington to introduce the first benchmark task of human-written empathetic dialogues centered on specific emotional labels, to measure a chatbot’s ability to display empathy. In addition to improving on automatic metrics, the team showed that using this data both for fine-tuning and as retrieval candidates leads to responses that humans evaluate as more empathetic, with an average improvement of 0.95 points (on a 1-to-5 scale) across three different retrieval and generative models. The next challenge for the team is that empathy-focused models should also perform well in complex dialogue situations, where agents may need to balance empathy with staying on topic or providing information.

Wikipedia dataset used to make dialogue models more knowledgeable

The research team improved dialogue models’ capability of demonstrating knowledge by collecting a data set of conversations grounded in Wikipedia, and creating new model architectures that retrieve knowledge, read it, and condition responses on it. This generative model yielded the most pronounced improvement, and it is rated by humans as 26% more engaging than its knowledgeless counterpart.

To engage with images, personality-based captions were used

To engage with humans, agents should not only comprehend dialogue but also understand images. In this research, the team focused on image captioning that is engaging for humans by incorporating personality. They collected a data set of human comments grounded in images and trained models capable of discussing images with given personalities, which makes the system interesting for humans to talk to. 64% of humans preferred these personality-based captions over traditional captions.

To build strong models, the team considered both retrieval and generative variants and leveraged modules from both the vision and language domains. They defined a powerful retrieval architecture, named TransResNet, that works by projecting the image, personality, and caption into the same space using image, personality, and text encoders. The team showed that their system was able to produce captions close to matching human performance in terms of engagement and relevance, and annotators preferred its retrieval model’s captions over captions written by people 49.5% of the time.
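A highly simplified sketch of that retrieval-style scoring might look like the following; this is our illustration, and the real TransResNet uses learned projections and training objectives not shown here. It assumes the three encoders have already mapped their inputs into a shared vector space:

```python
import numpy as np

def rank_captions(image_vec, personality_vec, caption_vecs):
    """Rank candidate captions against a combined image + personality context.

    image_vec, personality_vec: 1-D arrays in the shared embedding space.
    caption_vecs: 2-D array, one row per candidate caption embedding.
    Returns candidate indices, best match first.
    """
    context = image_vec + personality_vec   # simplest possible combination
    scores = caption_vecs @ context         # dot-product similarity per candidate
    return np.argsort(-scores)
```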
Apart from this, the Facebook team has released a new data collection and model evaluation tool, a Messenger-based chatbot game called Beat the Bot, which allows people to interact directly with bots and other humans in real time, creating rich examples to help train models.

To conclude, the Facebook AI team mentions, “Our research has shown that it is possible to train models to improve on some of the most common weaknesses of chatbots today. Over time, we’ll work toward bringing these subtasks together into one unified intelligent agent by narrowing and eventually closing the gap with human performance. In the future, intelligent chatbots will be capable of open-domain dialogue in a way that’s personable, consistent, empathetic, and engaging.”

On Hacker News, this research has received both positive and negative reviews. Some worry that AI that can converse like humans will do a lot of harm, while other users say this is an impressive improvement in the field of conversational AI.

One user comment reads, “I gotta say, when AI is able to converse like humans, a lot of bad stuff will happen. People are so used to the other conversation partner having self-interest, empathy, being reasonable. When enough bots all have a “swarm” program to move conversations in a particular direction, they will overwhelm any public conversation. Moreover, in individual conversations, you won’t be able to trust anything anyone says or negotiates. Just like playing chess or poker online now. And with deepfakes, you won’t be able to trust audio or video either. The ultimate shock will come when software can render deepfakes in realtime to carry on a conversation, as your friend but not. As a politician who “said crazy stuff” but really didn’t, but it’s in the realm of believability. I would give it about 20 years until it all goes to shit. If you thought fake news was bad, realtime deepfakes and AI conversations with “friends” will be worse.”

Scroll Snapping and other cool CSS features come to Firefox 68
Google Chrome to simplify URLs by hiding special-case subdomains
Lyft releases an autonomous driving dataset “Level 5” and sponsors research competition

Cloudflare terminates services to 8chan following yet another set of mass shootings in the US. Tech awakening or liability avoidance?

Sugandha Lahoti
06 Aug 2019
9 min read
Update: Jim Watkins, the owner of 8chan, has spoken against the ongoing backlash in a defensive video statement uploaded to YouTube on 6th August. "My company takes a firm stand in helping law enforcement, and within minutes of these two tragedies, we were working with FBI agents to find out what information we could to help in their investigations. There are about 1 million users of 8chan. 8chan is an empty piece of paper for writing on. It is disturbing to me that it can be so easily shut down. Over the weekend the domain name service for 8chan was abruptly terminated by the provider Cloudflare," he states in the video.

He adds, "First of all the El Paso shooter posted on Instagram, not 8chan. Later someone uploaded a manifesto; however, that manifesto was not uploaded by the Walmart shooter. It is unfortunate that this place of free speech has temporarily been removed. We are working to restore service. It is clearly a political move to remove 8chan from Cloudflare; it has dispersed a peacefully assembled group of people."

Watkins went on to call Cloudflare's decision "cowardly". He said, "Contrary to the unfounded claim by Mr. Prince of Cloudflare, 8chan is a lawful community abiding by the laws of the United States and enforced in the Ninth Circuit Court. His accusation has caused me tremendous damage. In the meantime, I wish his company the best and hold no animosity towards him or his cowardly and not thought-out actions against 8chan."

Saturday witnessed two horrific mass shooting tragedies: one when a maniac gunman shot at least 20 people at a sprawling Walmart shopping complex in El Paso, Texas; the other in Dayton, Ohio, at the entrance of Ned Peppers Bar, where ten people were killed, including the perpetrator, and at least 27 others were injured.

The gunman in the El Paso shooting has been identified as Patrick Crusius, according to CNN sources. He appears to have been inspired by the online forum known as 8chan, an online message board that is home to online extremists who share racist and anti-Semitic conspiracy theories. According to police officials, a four-page document that they believe was written by Crusius was posted to 8chan 20 minutes before the shootings. The post said, "I'm probably going to die today." It echoed white nationalist talking points, blamed immigrants for taking away jobs, and spewed racist hatred towards immigrants and Hispanics.

The El Paso post is not an isolated incident: 8chan has been filled with unmoderated violent and extremist content over time. Nearly the same thing happened on 8chan before the terror attack in Christchurch, New Zealand. In his post, the El Paso shooter referenced the Christchurch incident, saying he was inspired by the Christchurch content on 8chan, which glorified the previous massacre. The suspected killer in the synagogue shootings in Poway, California, also posted a hate-filled “open letter” on 8chan. In March this year, Australian telecom company Telstra denied millions of Australians access to the websites 4chan, 8chan, Zero Hedge, and LiveLeak as a reaction to the Christchurch mosque shootings.

Cloudflare first defends 8chan citing ‘moral obligations’ but later cuts all ties

Following this disclosure, Cloudflare, which provides internet infrastructure services to 8chan, initially continued to defend hosting 8chan, calling it its “moral obligation” to provide 8chan its services.
Keeping 8chan within its network is a “moral obligation”, said Cloudflare, adding: “We, as well as all tech companies, have an obligation to think about how we solve real problems of real human suffering and death. What happened in El Paso today is abhorrent in every possible way, and it’s ugly, and I hate that there’s any association between us and that … For us, the question is which is the worse evil? Is the worse evil that we kick the can down the road and don’t take responsibility? Or do we get on the phone with people like you and say we need to own up to the fact that the internet is home to many amazing things and many terrible things and we have an absolute moral obligation to deal with that.”

https://twitter.com/slpng_giants/status/1158214314198745088
https://twitter.com/iocat/status/1158218861658791937

Cloudflare has been under the spotlight over the past few years for continuing to work with websites that foster hate. Prior to 8chan, in 2017, Cloudflare had to discontinue services to the neo-Nazi blog The Daily Stormer after the terror attack in Charlottesville. However, The Daily Stormer continues to run today, having moved to a different infrastructure service, with allegedly more readers than ever.

After an intense public and media backlash over the weekend, Cloudflare announced that it would completely stop providing support for 8chan. Cloudflare is also readying for an initial public offering in September, which may have been a reason to cut ties with 8chan. In a blog post today, the company explained the decision to cut off 8chan: "We just sent notice that we are terminating 8chan as a customer effective at midnight tonight Pacific Time. The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths."

Cloudflare has also cut off 8chan's access to its DDoS protection service, although this will have only a short-term impact; 8chan can always find another cloud partner and resume operations. Cloudflare acknowledges it as well: “While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online. It does nothing to address why mass shootings occur. It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we've solved our own problem, but we haven't solved the Internet’s.”

The company added, “We feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often,” adding that this is not “due to some conception of the United States’ First Amendment,” since Cloudflare is a private company (and most of its customers, and more than half of its revenue, are outside the United States). Instead, Cloudflare “will continue to engage with lawmakers around the world as they set the boundaries of what is acceptable in those countries through due process of law. And we will comply with those boundaries when and where they are set.”

Founder of 8chan wants the site to be shut off

8chan founder Fredrick Brennan also appreciated Cloudflare’s decision to block the site. After the gruesome El Paso shootings, he told the Washington Post that the site’s owners should “do the world a favor and shut it off.” However, he told BuzzFeed News that shutting down 8chan wouldn’t entirely stop the extremism we’re now seeing, though it would make it harder for extremists to organize.
https://twitter.com/HW_BEAT_THAT/status/1158194175755485191

In a March interview with The Wall Street Journal, he expressed his regrets over his role in the site’s creation and warned that the violent culture that had taken root on 8chan’s boards could lead to more mass shootings. Brennan founded the site in 2011 and announced his departure from the company in July 2016. 8chan is owned by Jim Watkins and run by his son, Ron. Ron posted on Twitter that 8chan will be moving to another service ASAP; the owners have also resisted calls to moderate or shut down the site. On Sunday, a banner at the top of 8chan’s home page read, “Welcome to 8chan, the Darkest Reaches of the Internet.”

https://twitter.com/CodeMonkeyZ/status/1158202303096094720

Cloudflare acted too late, too little

Cloudflare's decision to simply block 8chan was not seen as an adequate response by some, who say Cloudflare should have acted earlier. 8chan was known for enabling child pornography as far back as 2015 and, as a result, was removed from Google Search. Coupled with the Christchurch mosque and Poway synagogue shootings earlier in the year, this increased pressure on those providing 8chan's internet and financial service infrastructure to terminate their support.

https://twitter.com/BinaryVixen899/status/1158216197705359360

Laurie Voss, the cofounder of npmjs, called out Cloudflare, and subsequently other content sites (Facebook, Twitter), for shirking responsibility under the guise of being infrastructure companies that therefore cannot enforce content standards.

https://twitter.com/seldo/status/1158204950595420160
https://twitter.com/seldo/status/1158206331662323712

“Facebook, Twitter, Cloudflare, and others pretend that they can't. They can. They just don't want to.”

https://twitter.com/seldo/status/1158206867438522374

“I am super, super tired of companies whose profits rely on providing maximum communication with minimum moderation pretending this is some immutable law and not just the business model they picked,” he tweeted. Others also agreed that Cloudflare’s statement eschews responsibility.

https://twitter.com/beccalew/status/1158196518983045121
https://twitter.com/slpng_giants/status/1158214314198745088

Voxility, 8chan’s hardware provider, also bans the site

Web services company Voxility has also banned 8chan and its new host Epik, which had been leasing web space from it. Epik’s website remains accessible, but 8chan now returns an error message. “As soon as we were notified of the content that Epik was hosting, we made the decision to totally ban them,” Voxility business development VP Maria Sirbu told The Verge. Sirbu said it was unlikely that Voxility would work with Epik again. “This is the second situation we’ve had with the reseller and this is not tolerable,” she said.

https://twitter.com/alexstamos/status/1158392795687575554

Does de-platforming even work?

De-platforming, or banning people who spread extremist content, is not a complete solution, since they will eventually migrate to other platforms and still be able to circulate their ideology. Closing 8chan does not solve the bigger problem of controlling racism and extremism; closing one 8chan will sprout another 20chan.
“8chan is no longer a refuge for extremist hate — it is a window opening onto a much broader landscape of racism, radicalization, and terrorism. Shutting down the site is unlikely to eradicate this new extremist culture because 8chan is anywhere. Pull the plug, it will appear somewhere else, in whatever locale will host it. Because there's nothing particularly special about 8chan, there are no content algorithms, hosting technology immaterial. The only thing radicalizing 8chan users are other 8chan users,” Ryan Broderick from BuzzFeed wrote. A group of users told BuzzFeed that it’s now common for large 4chan threads to migrate over into Discord servers before the 404.

After Cloudflare, Amazon is also beginning to face public scrutiny, as 8chan’s operator Jim Watkins sells audiobooks on Amazon.com and Audible.

https://twitter.com/slpng_giants/status/1158213239697747968

Facebook will ban white nationalism and separatism content in addition to white supremacy content.
8 tech companies and 18 governments sign the Christchurch Call to curb online extremism; the US backs off.
How social media enabled and amplified the Christchurch terrorist attack