It is difficult, if not impossible, to completely prevent web scraping. If a web server serves the information, there will be some way to extract the data programmatically. All you can do is put hurdles in the way. It amounts to obfuscation, which you could argue is not worth the effort.
Rendering content with JavaScript makes scraping more difficult, but not impossible: Selenium can drive real web browsers, and headless browsers such as PhantomJS can execute the JavaScript directly.
Requiring authentication can help limit how much scraping is done, and rate limiting can provide some additional relief. Rate limiting can be applied with tools such as iptables, or at the application level based on the client's IP address or user session.
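An application-level rate limit can be as simple as counting requests per client per time window. Below is a minimal sketch in Python; the class name, limits, and in-memory storage are illustrative assumptions (a production setup would typically use a shared store such as Redis so limits hold across processes).

```python
import time
from collections import defaultdict

class RateLimiter:
    """Fixed-window rate limiter keyed by client IP (illustrative sketch).

    State is held in process memory, so this only works for a
    single-process server; it is meant to show the idea, not to be
    production-ready.
    """

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        # ip -> [window_start_time, request_count_in_window]
        self.windows = defaultdict(lambda: [0.0, 0])

    def allow(self, ip, now=None):
        """Return True if this request is within the limit for `ip`."""
        now = time.monotonic() if now is None else now
        window_start, count = self.windows[ip]
        if now - window_start >= self.window_seconds:
            # The previous window has expired; start a fresh one.
            self.windows[ip] = [now, 1]
            return True
        if count < self.max_requests:
            self.windows[ip][1] = count + 1
            return True
        return False  # over the limit: respond with HTTP 429 Too Many Requests
```

In a request handler you would call `limiter.allow(client_ip)` and return a 429 status when it yields `False`.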
Checking the user agent provided by the client is a shallow measure, since the header is trivially spoofed, but it can help a bit. Discard requests whose user agent is missing or matches known scraping tools and libraries.
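Such a check can be sketched as a small denylist lookup. The token list below is an assumed example, not an exhaustive catalogue, and any scraper can defeat it by sending a browser-like user agent.

```python
# Hypothetical denylist of substrings seen in common scraping tools.
SCRAPER_AGENT_TOKENS = ("curl", "wget", "python-requests", "scrapy", "httpclient")

def looks_like_scraper(user_agent):
    """Return True if the User-Agent header is absent or matches a known tool."""
    if not user_agent:
        return True  # many scraping libraries send no User-Agent at all
    ua = user_agent.lower()
    return any(token in ua for token in SCRAPER_AGENT_TOKENS)
```

A handler would check `looks_like_scraper(request_headers.get("User-Agent"))` and reject matching requests, accepting that this only filters out the laziest clients.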