This saying has been around for more than two thousand years, but it carries greater weight for 21st-century humans than ever before. Humanity is entering an enormous race: on one side stands the independent individual; on the other, Google, Facebook, Amazon, and even governments that use big-data technology to watch you. Once Google knows you better than you know yourself, it can begin to control and even manipulate you. If you don't want to be left out of the game, you must run faster than Google. Good luck!
Yes. µVision can create the batch file for you. Check the Project Options -> Output Tab -> Create Batch File checkbox. Now when you rebuild all target files, µVision creates a batch file containing all the commands necessary to build your project from a DOS box.
On Mac OS X you can install the “GDAL Complete” Framework from kyngchaos.com.
GDAL applications are run through the Mac OS X Terminal. The first time you install the GDAL package there is one additional step to make sure you can access these programs. Open the Terminal application and run the following commands:
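The kyngchaos installer places the GDAL programs outside your default search path. Assuming the framework's standard install location, the commands look like this:

echo 'export PATH=/Library/Frameworks/GDAL.framework/Programs:$PATH' >> ~/.bash_profile
source ~/.bash_profile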
You should now be ready to go. To test your installation, run the Terminal command gdalinfo --version. A correct installation will output something like GDAL 1.9.0, released 2011/12/29.
While TileMill's renderer does support reprojecting raster data sources on the fly, this can slow down your map preview and exports significantly. For this reason it is recommended that you warp the file to the proper projection before importing it into your TileMill project. This can be done with the gdalwarp command that comes with the GDAL library.
The projection we need to warp to is Google Web Mercator, which can be referenced by the code 'EPSG:3857'. You will also need to know the original projection of the geotiff you are converting. As an example, we'll work with the medium-sized 'Natural Earth II with Shaded Relief and Water' geotiff available from Natural Earth, which is projected in WGS 84 (aka 'EPSG:4326').
In your terminal, navigate to the directory where the geotiff is stored and run the following command. (This is one command split across several lines; you should be able to copy and paste the whole thing at once.)
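Putting the options explained below together, the command looks like this (the input filename matches the Natural Earth download used here):

gdalwarp -s_srs EPSG:4326 -t_srs EPSG:3857 \
  -r bilinear \
  -te -20037508.34 -20037508.34 20037508.34 20037508.34 \
  LR_LC_SR_W.tif natural-earth-2-mercator.tif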
Let’s go through what each piece of that command means. A full description of the gdalwarp command options can be found in the GDAL documentation.
-s_srs means "source spatial reference system" – this is the projection that the file you are starting with is stored in, which in the case of Natural Earth is EPSG:4326.
-t_srs means “target spatial reference system” – this is the projection that you want to convert the datasource to. For any raster file you want to use with TileMill this should be EPSG:3857.
-r bilinear is telling the program what resampling interpolation method to use. If you want the command to run faster and don’t mind a rougher-looking output, choose near instead of bilinear. If you don’t mind waiting longer for very high-quality output, choose lanczos.
-te -20037508.34 -20037508.34 20037508.34 20037508.34 is telling the program the desired “target extent” of our output file. This is necessary because the Natural Earth geotiff contains data outside the bounds that the web mercator projection is intended to display. The WGS 84 projection can safely contain data all the way to 90° North & South, while web mercator is really only intended to display data up to about 85.05° North & South. The four big numbers after -te represent the western, southern, eastern and northern limits (respectively) of a web mercator map.
If you are working with raster data covering a smaller area, you will need to adjust these numbers to reflect the area it represents. If that area does not go too far north or south, you can safely omit this option entirely.
LR_LC_SR_W.tif is our original file, and natural-earth-2-mercator.tif is the name of the new file the program will create.
Depending on the size of your file and the resampling method you choose, gdalwarp can take anywhere from a few seconds to a few hours to do its job. With the cubic resampling method, the medium-sized Natural Earth file should take a few minutes or less.
Could memory work the same way? Have people really forgotten those euphoric days of 1989? While the students were protesting peacefully in Tiananmen Square, the whole city was infected by a spirit of idealism, floating happily on a sea of goodwill and generosity. Overnight, social mores changed miraculously: people stopped pushing, shoving, and shouting; strangers meeting in the street greeted one another with bright smiles, flashed V-signs, or politely discussed social issues. Taxi drivers gave students free rides, and shopkeepers donated money to the hunger strikers. I remember rumors at the time that even the thieves, moved by the noble sentiment and solidarity, had stopped stealing. Later, when the government sent in soldiers to enforce martial law, hundreds of thousands of ordinary Beijing residents stepped forward, facing down the gun barrels to persuade the soldiers not to carry out their orders. I saw with my own eyes white-haired old women lying down in the street to block the government's armored personnel carriers.
It is a complete browser (End-to-End) testing solution which aims to simplify the process of setting up Continuous Integration and writing automated tests. Nightwatch can also be used for writing Node.js unit tests.
Nightwatch got its name from the famous painting The Night Watch by Dutch artist Rembrandt van Rijn. The masterpiece is prominently displayed in the Rijksmuseum, in Amsterdam – The Netherlands.
Overview of WebDriver
WebDriver is a general purpose library for automating web browsers. It was started as part of the Selenium project, which is a very popular and comprehensive set of tools for browser automation, initially written for Java but now with support for most programming languages.
Nightwatch uses the WebDriver API to perform browser automation tasks, such as opening windows and clicking links.
WebDriver is now a W3C specification which aims to standardize browser automation. WebDriver is a remote-control interface that enables introspection and control of user agents. It provides a platform-neutral RESTful HTTP API through which web browsers can be remotely controlled.
Theory of Operation
Nightwatch works by communicating over a RESTful HTTP API with a WebDriver server (typically the Selenium server). The RESTful API protocol is defined by the W3C WebDriver API. See below for an example workflow for browser initialization.
Most of the time, Nightwatch needs to send at least two requests to the WebDriver server in order to perform a command or assertion: the first request locates an element given a CSS selector (or XPath expression), and the next performs the actual command/assertion on that element.
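For example, a click on a CSS selector typically maps to an exchange like the following (the session and element IDs are hypothetical):

POST /wd/hub/session/a1b2c3/element
{"using" : "css selector", "value" : "#login-button"}
→ {"value" : {"ELEMENT" : "0"}}

POST /wd/hub/session/a1b2c3/element/0/click
{}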
“Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.”
There are installation packages and instructions for most major operating systems on its website, nodejs.org. Remember to also install npm, the Node package manager, which is distributed with the Node.js installer.
Install Nightwatch
To install the latest version using the npm command line tool, run the following:
$ npm install [-g] nightwatch
Add the -g option to make the nightwatch runner available globally on your system.
Selenium Server Setup
The most common WebDriver implementation is the Selenium Server. This allows you to manage multiple browser configurations in one place. However, you can also run the individual browser drivers directly, such as ChromeDriver; more details are available in the Browser Drivers Setup section.
Selenium Server
Selenium Server is a Java application which Nightwatch uses to connect to the various browsers. It runs separately on the machine with the browser you want to test. You will need to have the Java Development Kit (JDK) installed; the minimum required version is 7. You can check this by running java -version from the command line.
Download Selenium
Download the latest version of the selenium-server-standalone-{VERSION}.jar file from the Selenium downloads page and place it on the computer with the browser you want to test. In most cases this will be on your local machine and typically inside your project’s source folder.
A good practice is to create a separate subfolder (e.g. bin) and place it there as you might have to download other driver binaries if you want to test multiple browsers.
Running Selenium Automatically
If the server is on the same machine where Nightwatch is running, it can be started/stopped directly by the Nightwatch Test Runner.
Running Selenium Manually
To run the Selenium Server manually, run the following from the directory containing the jar (substitute the version you downloaded):
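$ java -jar selenium-server-standalone-{VERSION}.jar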
Starting with Selenium 3, FirefoxDriver is no longer included in the package. Also, starting with version 48, Firefox is no longer compatible with FirefoxDriver which is shipped with Selenium 2.x. Firefox users are advised to use GeckoDriver for their testing. For more info, refer to the browser setup section.
The test runner expects a configuration file to be passed; by default it uses the nightwatch.json file from the current directory, if present. A nightwatch.conf.js file will also be loaded by default, if found.
Let’s create the nightwatch.json in the project’s root folder and add this inside:
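A minimal sketch, assembled from the settings documented below (paths and the jar version are placeholders to adapt):

{
  "src_folders" : ["tests"],
  "output_folder" : "reports",
  "selenium" : {
    "start_process" : true,
    "server_path" : "bin/selenium-server-standalone-{VERSION}.jar",
    "port" : 4444
  },
  "test_settings" : {
    "default" : {
      "launch_url" : "http://localhost",
      "selenium_port" : 4444,
      "selenium_host" : "localhost",
      "desiredCapabilities" : {
        "browserName" : "firefox",
        "acceptSslCerts" : true
      }
    }
  }
}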
src_folders (string|array; default: none) – An array of folders (excluding subfolders) where the tests are located.
output_folder (optional; string; default: tests_output) – The location where the JUnit XML report files will be saved.
custom_commands_path (optional; string|array; default: none) – Location(s) where custom commands will be loaded from.
custom_assertions_path (optional; string|array; default: none) – Location(s) where custom assertions will be loaded from.
page_objects_path (optional; string|array; default: none) – Location(s) where page object files will be loaded from.
globals_path (optional; string; default: none) – Location of an external globals module which will be loaded and made available to the tests as the globals property on the main client instance. Globals can also be defined/overwritten inside a test_settings environment.
selenium (optional; object) – An object containing Selenium Server related configuration options. See below for details.
test_settings (object) – This object contains all the test-related options. See below for details.
live_output (optional; boolean; default: false) – Whether or not to buffer the output when running in parallel. See below for details.
disable_colors (optional; boolean; default: false) – Whether or not to disable coloring of the CLI output globally.
parallel_process_delay (optional; integer; default: 10) – Specifies the delay (in milliseconds) between starting the child processes when running in parallel mode.
test_workers (optional; boolean|object; default: false) – Whether or not to run individual test files in parallel. If set to true, the tests run in parallel and the number of workers is determined automatically. If set to an object, the number of workers can be specified as "auto" or a number.
Below are a number of options for the Selenium Server process. Nightwatch can start and stop the Selenium process automatically, which is very convenient as you don't have to manage it yourself and can focus only on the tests.
If you’d like to enable this, set start_process to true and specify the location of the jar file inside server_path.
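For example (a sketch; adjust the jar path to your download):

"selenium" : {
  "start_process" : true,
  "server_path" : "bin/selenium-server-standalone-2.43.0.jar",
  "log_path" : "",
  "port" : 4444,
  "cli_args" : {
    "webdriver.chrome.driver" : "",
    "webdriver.ie.driver" : ""
  }
}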
start_process (boolean; default: false) – Whether or not to manage the Selenium process automatically.
start_session (boolean; default: true) – Whether or not to automatically start the Selenium session. This will typically be set to false when running unit/integration tests that don't interact with the Selenium server.
server_path (string; default: none) – The location of the Selenium jar file. This needs to be specified if start_process is enabled. E.g.: bin/selenium-server-standalone-2.43.0.jar
log_path (string|boolean; default: none) – The location where the Selenium output.log file will be placed. Defaults to the current directory. To disable Selenium logging, set this to false.
port (integer; default: 4444) – The port number Selenium will listen on.
cli_args (object; default: none) – List of CLI arguments to be passed to the Selenium process. Here you can set various options for browser drivers, such as:
webdriver.firefox.profile: Selenium will by default create a new Firefox profile for each session. If you wish to use an existing Firefox profile, you can specify its name here.
A complete list of Firefox Driver arguments is available here.
webdriver.chrome.driver: Nightwatch can also run tests using the Chrome browser. To enable this, download the ChromeDriver binary and specify its location here. Also, don't forget to specify chrome as the browser name in the desiredCapabilities object.
More information can be found on the ChromeDriver website.
webdriver.ie.driver: Nightwatch also supports Internet Explorer. To enable this, download the IE Driver binary and specify its location here. Also, don't forget to specify "internet explorer" as the browser name in the desiredCapabilities object.
Test settings
Below are a number of settings that will be passed to the Nightwatch instance. You can define multiple sections (environments) of test settings so that you can overwrite specific values per environment.
A "default" environment is required. All other environments inherit from default and can overwrite settings as needed.
{
  ...
  "test_settings" : {
    "default" : {
      "launch_url" : "http://localhost",
      "globals" : {
        "myGlobalVar" : "some value",
        "otherGlobal" : "some other value"
      }
    },
    "integration" : {
      "launch_url" : "http://staging.host",
      "globals" : {
        "myGlobalVar" : "other value"
      }
    }
  }
}
The key of the settings group can then be passed to the runner as the --env argument to use the specified settings, like so:
$ nightwatch --env integration
This can be useful if you need to have different settings for your local machine and the Continuous Integration server.
The launch_url property
This property will be made available to the main Nightwatch api which is used in the tests. Its value depends on which environment is used.
If you run your tests as in the example above (with --env integration), launch_url will be set to http://staging.host, as per the configuration. Otherwise it will have the value defined in the default environment (i.e. http://localhost).
A very useful concept that Nightwatch provides is test globals. In its simplest form, this is a dictionary of name-value pairs defined in your nightwatch.json configuration file. Like the launch_url property, globals are made available directly on the Nightwatch API which is passed to the tests. They also depend on the environment used, so specific globals can be overwritten per environment.
If we pass the --env integration option to the runner, our globals object will look like this (the integration value overrides the default, while unspecified globals are inherited):
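{
  "myGlobalVar" : "other value",
  "otherGlobal" : "some other value"
}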
By default, a deep object copy will be created for each test suite run. If you’d like to maintain the same object throughout the entire tests run, set the persist_globals option to true, as detailed below.
Full list of settings
launch_url (string; default: none) – A URL which can be used later in the tests as the main URL to load. Can be useful if your tests will run on different environments, each with a different URL.
selenium_host (string; default: localhost) – The hostname/IP on which the Selenium server is accepting connections.
selenium_port (integer; default: 4444) – The port number on which the Selenium server is accepting connections.
request_timeout_options (since v0.9.11; object; default: 60000 / 0) – Defines the number of milliseconds an HTTP request to the Selenium server will be kept open before a timeout is reached. After a timeout, the request can be automatically retried a specified number of times, defined by the retry_attempts property.
firefox_profile (deprecated; string; default: none) – This option has been deprecated in favor of the cli_args object on the selenium settings object.
chrome_driver (deprecated; string; default: none) – This option has been deprecated in favor of the cli_args object on the selenium settings object.
ie_driver (deprecated; string; default: none) – This option has been deprecated in favor of the cli_args object on the selenium settings object.
screenshots (object; default: none) – Selenium generates screenshots when command errors occur. With on_failure set to true, screenshots are also generated for failing or erroring tests and saved to disk. Since v0.7.5 you can disable screenshots for command errors by setting "on_error" to false.
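For example (the path is a placeholder):

"screenshots" : {
  "enabled" : true,
  "on_failure" : true,
  "on_error" : false,
  "path" : "screenshots"
}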
desiredCapabilities (object) – An object which will be passed to the Selenium WebDriver when a new session is created. You can specify the browser name, for instance, along with other capabilities.
Example:
"desiredCapabilities" : {
"browserName" : "firefox",
"acceptSslCerts" : true
}
You can view the complete list of capabilities here.
globals (object) – An object which will be made available within the test and can be overwritten per environment. Example:
"globals" : {
"myGlobal" : "some_global"
}
exclude (array) – An array of folders or file patterns to be skipped (relative to the main source folder).
Example (the folder name is a placeholder):
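"exclude" : ["excluded-folder"]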
filter (string) – Folder or file pattern to be used when loading the tests. Files that don't match this pattern will be ignored.
Example:
"filter" : "tests/*-smoke.js"
log_screenshot_data (boolean; default: false) – Do not show the Base64 image data in the (verbose) log when taking screenshots.
use_xpath (boolean; default: false) – Use XPath as the default locator strategy.
cli_args (object; default: none) – Same as the Selenium settings cli_args. You can override the global cli_args on a per-environment basis.
end_session_on_fail (boolean; default: true) – End the session automatically when the test run is terminated, usually after a failed assertion.
skip_testcases_on_fail (boolean; default: true) – Skip the remaining test cases (or test steps) from the same test suite (i.e. test file) when one test case fails.
output_folder (since v0.8.18; string|boolean) – Defines the location where the JUnit XML report files will be saved. This overwrites any value defined in the Basic Settings section. If you'd like to disable the reports completely inside a specific environment, set this to false.
persist_globals (since v0.8.18; boolean; default: false) – Set this to true to persist the same globals object between test suite runs, rather than creating a (deep) copy of it for each test suite.
compatible_testcase_support (since v0.9.0; boolean; default: false) – Applies to unit tests. When set to true, tests can be written in the standard Exports interface, which is interchangeable with the Mocha framework. Support for the prior unit-test interface is deprecated, and this will become the default in future releases.
detailed_output (since v0.9.0; boolean; default: true) – By default, detailed assertion output is displayed while the test is running. Set this to false to display only the test case name and pass/fail status, which is especially useful when running tests in parallel.
Browser Drivers Setup
This section contains guides for getting started with most of the major browsers and setup instructions on how to configure the individual webdriver implementations to work with Nightwatch.
The individual drivers described here are usually standalone applications which are used to interact with the browsers via the WebDriver HTTP API. You can run them either directly, or through the Selenium Server.
GeckoDriver
Overview
GeckoDriver is a standalone application used to interact with Gecko-based browsers, such as Firefox. It is written in Rust and maintained by Mozilla.
Starting with Firefox 48, GeckoDriver is the only way to automate Firefox; the legacy FirefoxDriver which used to be part of Selenium is no longer supported. Internally, GeckoDriver translates the HTTP calls into Marionette, Mozilla's automation protocol built into Firefox.
Download
Binaries are available for download on the GeckoDriver Releases page on GitHub, for various platforms.
Selenium 2.x users are advised to use version v0.9, whereas Selenium 3 users should use the latest version.
Usage
If you're using GeckoDriver through Selenium Server, simply set the cli argument "webdriver.gecko.driver" to point to the location of the binary file, e.g. (paths and the jar version are placeholders):
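"selenium" : {
  "start_process" : true,
  "server_path" : "bin/selenium-server-standalone-3.0.1.jar",
  "cli_args" : {
    "webdriver.gecko.driver" : "bin/geckodriver"
  }
}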
$ ./bin/geckodriver-0.10 -help
geckodriver 0.10.0
USAGE:
geckodriver-0.10 [FLAGS] [OPTIONS]
FLAGS:
--connect-existing Connect to an existing Firefox instance
-h, --help Prints help information
--no-e10s Start Firefox without multiprocess support (e10s) enabled
-V, --version Prints version information
-v Set the level of verbosity. Pass once for debug level logging and twice for trace level logging
OPTIONS:
-b, --binary Path to the Firefox binary, if no binary capability provided
--log Set Gecko log level [values: fatal, error, warn, info, config, debug, trace]
--marionette-port Port to use to connect to gecko (default: random free port)
--host Host ip to use for WebDriver server (default: 127.0.0.1)
-p, --port Port to use for WebDriver server (default: 4444)
Specifying the firefox profile can be done by setting the profile property in the firefoxOptions dictionary, as detailed above. This can be the base64-encoded zip of a profile directory and it may be used to install extensions or custom certificates.
Implementation Status
GeckoDriver is not yet feature complete, which means it does not yet offer full conformance with the WebDriver standard or complete compatibility with Selenium. Implementation status can be tracked on the Marionette MDN page.
ChromeDriver
Overview
ChromeDriver is a standalone server which implements the W3C WebDriver wire protocol for Chromium. ChromeDriver is available for Chrome on Android and Chrome on Desktop (Mac, Linux, Windows and ChromeOS).
Download
Binaries are available for download on the ChromeDriver Downloads page, for various platforms.
Selenium Server Usage
If you're using ChromeDriver through Selenium Server, simply set the cli argument "webdriver.chrome.driver" to point to the location of the binary file, e.g. (the path is a placeholder):
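"selenium" : {
  "cli_args" : {
    "webdriver.chrome.driver" : "bin/chromedriver"
  }
}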
If you're only running your tests against Chrome, running ChromeDriver standalone is easier and slightly faster. There is also no dependency on Java.
This requires a bit more configuration:
1) First, disable Selenium Server, if applicable:
{"selenium":{"start_process":false}}
2) Configure the port and default path prefix.
ChromeDriver runs by default on port 9515. We also need to clear default_path_prefix, as it is set by default to /wd/hub, which is what Selenium uses. A sketch of the resulting test_settings:
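{
  "test_settings" : {
    "default" : {
      "selenium_port" : 9515,
      "selenium_host" : "localhost",
      "default_path_prefix" : "",
      "desiredCapabilities" : {
        "browserName" : "chrome"
      }
    }
  }
}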
The easiest way to manage the ChromeDriver process is by using the chromedriver NPM package, which is a third-party wrapper around the binary. It abstracts away downloading the chromedriver binary and makes it easy to start and stop the process.
You can add this to your external globals file, like so:
var chromedriver = require('chromedriver');

module.exports = {
  before : function(done) {
    chromedriver.start();
    done();
  },

  after : function(done) {
    chromedriver.stop();
    done();
  }
};
Using a fixed ChromeDriver version
In some situations you may need to use a specific version of ChromeDriver. For instance, if the CI server runs an older version of Chrome, you will need an older version of ChromeDriver.
Here’s what your globals file might look like:
var chromedriver = require('chromedriver');
var path = require('path');
var driverInstanceCI;

function isRunningInCI() {
  return this.test_settings.globals.integration;
}

function startChromeDriver() {
  if (isRunningInCI.call(this)) {
    var location = path.join(__dirname, '../bin/chromedriver-linux64-2.17');
    driverInstanceCI = require('child_process').execFile(location, []);
    return;
  }
  chromedriver.start();
}

function stopChromeDriver() {
  if (isRunningInCI.call(this)) {
    driverInstanceCI && driverInstanceCI.kill();
    return;
  }
  chromedriver.stop();
}

module.exports = {
  'ci-server' : {
    integration : true
  },

  before : function(done) {
    startChromeDriver.call(this);
    done();
  },

  after : function(done) {
    stopChromeDriver.call(this);
    done();
  }
};
Run your tests then with (on the CI server):
$ ./node_modules/.bin/nightwatch --env ci-server
ChromeOptions
You can specify Chrome options or switches using the chromeOptions dictionary, under desiredCapabilities. Refer to the ChromeDriver website for a full list of supported capabilities and options. For example (the switch is illustrative):
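"desiredCapabilities" : {
  "browserName" : "chrome",
  "chromeOptions" : {
    "args" : ["start-fullscreen"]
  }
}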
Command line usage
$ ./bin/chromedriver -h
Usage: ./bin/chromedriver [OPTIONS]
Options
--port=PORT port to listen on
--adb-port=PORT adb server port
--log-path=FILE write server log to file instead of stderr, increases log level to INFO
--verbose log verbosely
--version print the version number and exit
--silent log nothing
--url-base base URL path prefix for commands, e.g. wd/url
--port-server address of server to contact for reserving a port
--whitelisted-ips comma-separated whitelist of remote IPv4 addresses which are allowed to connect to ChromeDriver
Microsoft WebDriver
Overview
Microsoft WebDriver is a standalone server which implements the W3C WebDriver wire protocol for the Edge browser. It is supported on Windows 10 and onwards.
If you're using Microsoft WebDriver through Selenium Server, simply set the cli argument "webdriver.edge.driver" to point to the location of the binary file, e.g. (the path is a placeholder):
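"selenium" : {
  "cli_args" : {
    "webdriver.edge.driver" : "C:/path/to/MicrosoftWebDriver.exe"
  }
}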
If you’re only running your tests against Edge, running the EdgeDriver standalone can be slightly faster. Also there is no dependency on Java.
This requires a bit more configuration and you will need to start/stop the EdgeDriver:
1) First, disable Selenium Server, if applicable:
{"selenium":{"start_process":false}}
2) Configure the port and default path prefix.
Microsoft WebDriver runs by default on port 9515. We also need to clear default_path_prefix, as it is set by default to /wd/hub, which is what Selenium uses. A sketch of the resulting test_settings, mirroring the ChromeDriver setup above:
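{
  "test_settings" : {
    "default" : {
      "selenium_port" : 9515,
      "selenium_host" : "localhost",
      "default_path_prefix" : "",
      "desiredCapabilities" : {
        "browserName" : "MicrosoftEdge"
      }
    }
  }
}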
EdgeDriver is not yet feature complete, which means it does not yet offer full conformance with the WebDriver standard or complete compatibility with Selenium. Implementation status can be tracked on the Microsoft WebDriver homepage.
The first time I introduced myself to others after arriving in Australia caught me a bit off guard. Before the semester began, to help the new students in the residence halls (more than 90% of them undergraduates) get to know one another, the university organized a Roman toga party, where participants draped bedsheets over themselves in imitation of the ancient Romans. I joined in enthusiastically. At the party, a local white girl asked me which country I was from, and I said I was from China. She immediately turned to her companions and said: "Why do Chinese people always say the word 'China' so loudly? Haha!" Then she began imitating my accent, loudly, over and over: "China! China!"
I gradually discovered that my experience was hardly an extreme case. In late July this year, posters reading "No Chinese allowed" were put up by pro-Nazi white youth groups at the entrances of several buildings on the campuses of the University of Melbourne and Monash University, which is located in the same city. The incident caused quite a stir among Chinese students. Recently, the Australian authorities have grown increasingly anxious about the growing investment and immigration from China, along with the political and ideological influence that accompanies them. Australian educators have even accused many Chinese students of bringing the Chinese government's official positions into the classroom, and Australian intelligence agencies have begun investigating whether certain wealthy Chinese businessmen who made large political donations are agents of the Chinese government. In the first half of this year, the Australian government put forward a proposal to amend the existing Racial Discrimination Act, arguing that its prohibition of racially discriminatory speech impeded freedom of speech. Spurred by this issue, however, members of Australia's minority groups shared their own experiences of racial discrimination on social media, and the proposal was ultimately rejected by the Australian Senate. This series of events demonstrates, at the very least, that a tension is gradually emerging between Australia's white mainstream and its minority groups.
Node.JS Top 10 Articles for the Past Month (v.Feb 2017)
For Jan-Feb 2017, we’ve ranked nearly 1,000 Node.JS articles to pick the Top 10 stories (1% chance) that can help advance your career.
Topics included in this list are: Best Practices, Home Automation, Notification, Interview Q/A, Docker, NASA, APIs, Microservice, Digital Ocean. The lists for JavaScript, React and Angular are posted separately.
Mybridge AI ranks articles based on the quality of content, measured by our machine, and a variety of human factors, including engagement and popularity. This is a competitive list, and you'll find the experience and techniques shared by Node.JS leaders particularly useful.
Machine Learning Top 10 Articles for the Past Year (v.2017)
For the past year, we’ve ranked nearly 14,500 Machine Learning articles to pick the Top 10 stories (0.069% chance) that can help you advance your career in 2017.
“It was machine learning that enabled AlphaGo to whip itself into world-champion-beating shape by playing against itself millions of times” — Demis Hassabis, Founder of DeepMind
AlphaGo astonishes Go grandmaster Lee Sedol with its winning move
This machine learning list includes topics such as: Deep Learning, A.I., Natural Language Processing, Face Recognition, Tensorflow, Reinforcement Learning, Neural Networks, AlphaGo, Self-Driving Car.
This is an extremely competitive list and Mybridge has not been solicited to promote any publishers. Mybridge A.I. ranks articles based on the quality of content measured by our machine and a variety of human factors including engagement and popularity. Academic papers were not considered in this batch.
Give yourself plenty of time to read all of the articles you've missed this year. You'll find the experience and techniques shared by the leading data scientists particularly useful.
GeoTrellis, a geographic data processing engine for high performance applications, is a Scala library and framework that uses Spark to work with raster data. GeoTrellis 1.0 was recently released under LocationTech, marking a major achievement for the community that has helped to build the project.
A 1.0 release is a significant milestone for an open source project. It's an indicator of maturity and reliability. GeoTrellis became an open source project in 2011 with the goal of helping people process raster data at scale. We moved to Apache Spark to support distributed processing with version 0.10.0 in April of this year. We have come a long way.
This post will explain the motivation to release under LocationTech and what the decision means for GeoTrellis users and contributors.
LocationTech is a working group hosted by the Eclipse Foundation with a charter to foster community around commercial-friendly, open source, advanced geospatial technology. GeoTrellis joined LocationTech in 2013. Here are some of the reasons why:
Access to legal support to ensure clarity for questions related to intellectual property
Commitment to a level of quality people can expect from graduated LocationTech projects
Contribution to governance of the Open Source Big Data Geospatial community
Expansion of the GeoTrellis community beyond Azavea
GeoTrellis developers have already benefited from collaboration with other LocationTech projects since joining in 2013. An example of this is when developers from GeoMesa and GeoTrellis worked together to create the SFCurve library, a solution to the common problem of creating Z-order curve indices based on spatial or spatiotemporal properties of data. Additionally, members of the GeoTrellis team have participated and presented at the annual LocationTech tour, which has become a global event promoting open source geospatial software.
Impact on Users
There will be a number of new features and a few inconveniences that come with 1.0. This major release marks our official graduation but includes only minor API breaks with respect to 0.10.3. The release from 0.9 to 0.10 had many large architectural changes stemming from the transition to Apache Spark, which required significant API changes. This is not the case for 1.0. You will need to upgrade your project and change the organization to "org.locationtech" as shown below:
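A sketch of the change in an sbt build (the module name and version are illustrative):

libraryDependencies += "org.locationtech.geotrellis" %% "geotrellis-spark" % "1.0.0"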
GeoTrellis will still be available on Maven Central, via Sonatype's Nexus repository, in addition to repo.locationtech.org. The previous release, 0.10.3, was only available on Sonatype's Nexus repository.
Major New Features
Streaming GeoTiff support
Windowed GeoTiff reading on S3 and Hadoop
Improved ETL capabilities
HBase and Cassandra backends support
Collections API that allows users to avoid Spark in ideal cases
Experimental support for Vector Tiles, GeoWave integration, and GeoMesa integration
Documentation moved to ReadTheDocs. This greatly improves usability, readability, and searchability
The GeoTrellis team decided that the many benefits of joining LocationTech outweigh any downsides. However, in the name of transparency, it's important to discuss the possible downsides we considered:
Giving the GeoTrellis trademark to Eclipse
Ceding some control. There are pros and cons to creating a larger decision-making body for a project. We think the increased number of perspectives will outweigh the possibility of slower decision-making time
One-time requirements to officially graduate involved:
Submitting the codebase and dependencies to ensure appropriate licensing
Setting up builds that publish to LocationTech infrastructure
Creating a release review so the Project Management Committee can do a final review of the release
Passing a graduation review to make sure the project is up to standards
We are excited about the move and the significance of the achievement. GeoTrellis has grown its community and user base over the years. GeoTrellis has been the collective work of more than 50 people and 6,500 commits.
A 1.0 release marks the effort of this community and the maturation of GeoTrellis. Moving forward, you can expect a regular release schedule.
Connect with us
We appreciate hearing about the projects that GeoTrellis supports – please get in touch via Twitter, our mailing list, our Gitter channel, or email to share what you are working on.
GitHub – Issues, codebase, documentation, everything you need
Our mailing list – Stay informed about releases, bug bashes, and GeoTrellis updates
Gitter – Scala is hard. We can help. Come ask questions about your GeoTrellis project
Twitter – We send team members to conferences, workshops, and share Big Data Open Source Geo project news
Email – Have questions about a project idea that could benefit from processing rasters at scale? Reach out to us via email – we'd love to hear from you!
For the past year, we’ve ranked nearly 8,500 Node.JS articles to pick the Top 10 stories (0.12% chance) that can help you prepare your development career in 2017.
This Node.JS list includes topics such as: Backend, MongoDB, Express, Structure, Test, Passport.
This is an extremely competitive list and Mybridge has not been solicited to promote any publishers. Mybridge AI ranks articles based on the quality of content, measured by our machine, and a variety of human factors, including engagement and popularity. Hopefully this condensed list will help you read and learn more productively in the area of Node.JS.
Give yourself plenty of time to read all of the articles you've missed this year. You'll find the experience and lessons shared by Node.JS leaders particularly useful.
Applying Neural Network and Local Laplace Filter Methods to Very High Resolution Satellite Imagery to Detect Damage in Urban Areas
by Dariia Gordiiuk
Since the beginning of the human species, we have been at the whim of Mother Nature, whose awesome power can destroy vast areas and cause chaos for their inhabitants. The use of satellite data to monitor the Earth's surface is becoming more and more essential. Of particular importance are disaster and hurricane monitoring systems that can help people identify damage in remote areas, measure the consequences of events, and estimate the overall damage to a given area. From a computing perspective, this is an important task to automate for use in a variety of situations.
To analyze and estimate the effects of a disaster, we use high-resolution satellite imagery from an area of interest, which can be obtained from Google Earth. We can also get free OSM vector data that provides a detailed ground-truth mask of houses. This is the latest vector zip from New York (Figure 1).
Figure 1. NY Buildings Vector Layer
Next, we rasterize (convert from vector to raster) the image using a tool from GDAL called gdal_rasterize. As a result we acquire a training and testing dataset from Long Island (Figure 2).
Figure 2. Training Data Fragment of CNN
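A sketch of such an invocation (the filenames, layer name, and raster size are illustrative):

gdal_rasterize -burn 255 -ot Byte -ts 8192 8192 -l buildings buildings.shp buildings-mask.tif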
We use the deep learning framework Caffe to train a Convolutional Neural Network (CNN):
Figure 3. CNN Parameters
The derived neural net enables us to identify the predicted houses in the target area after the event (Figure 4). We can also use data from another, similar area which hasn't been damaged for CNN training (if we can't access the data for the desired territory).
Figure 4. Predictive Results of CNN Learning
We vectorize the building predictions (extracting contours and then converting the lines to polygons) (Figure 5).
Figure 5. Predictive Results of Buildings (Based on CNN)
We also need to compute the intersection of the obtained prediction vector and the original OSM vector (Figure 6). This can be accomplished by creating a new filter that divides the area of the predicted buildings by the area of the original OSM buildings. Then we filter the predicted houses by applying a threshold of 10%: if the area of the houses in green (Figure 6) is less than 10% of the area in red, the real buildings have been destroyed.
Figure 6. Calculating CNN-Obtained Building Number (Green) Among Buildings Before Disaster (Red)
Using the 10% area threshold, we can remove the houses that have been destroyed and get a new map that displays the existing buildings (Figure 7). By computing the difference between the pre- and post-disaster masks, we obtain a map of the destroyed buildings (Figure 8).
Figure 7. Buildings: Before and After Disaster With CNN Method
Figure 8. Destroyed Buildings With CNN
We have to remember that the roofs of the houses are represented as flat structures in 2D images. This is an important feature that can also be used to filter input images. A local Laplace filter is a great tool for classifying flat and rough surfaces (Figure 9). The first image has to be a 4-channel image whose fourth alpha channel marks the no-data-value pixels of the input image. The second image (img1) is the same scene as a 3-channel RGB image.
Figure 9. Local Laplace Window Filter
Applying this tool lets you get a map of the flat surfaces. Let's look at the new mask of the buildings with flat and rough textures (Figure 10) after combining this filter and extracting the vector map.
Figure 10. Flat Surface Mask With Laplace Window Filter Followed By Extracted House Mask
The robust OpenCV computer vision library has a denoising filter that helps remove noise from the flat-building masks (Figures 11, 12).
Figure 11. Denoising Filter
Figure 12. Resulting Mask. Pre- and Post- Disaster Images After Applying Denoising Filter
Next, we apply filters to extract the contours and convert the lines into polygons. This enables us to get new building-recognition results (Figure 13).
Figure 13. Predictive Results of Buildings With Laplace Filter
We compute the area of the intersection of the vector mask obtained from the filter and the ground-truth OSM mask, and use a 14% threshold to reduce false positives (Figure 14).
Figure 14. Calculations: Buildings With Laplace Filter (Yellow) Before Damage (Green), Using 14% Threshold
As a result, we can see a very impressive new mask that describes houses that have survived the hurricane (Figure 15) and a vector of the ruined buildings (Figure 16).
Figure 15. Before and After Disaster With Laplace Filter
Figure 16. Destroyed Buildings With Laplace Filter
After we have found the ruined houses, we can also pinpoint their location. For this task OpenStreetMap comes in handy. We installed an OSM plugin in QGIS and added an OSM layer to the canvas (Figure 17). Then we added a layer with the destroyed houses, so we can see all their addresses. To get a file with the full addresses of the destroyed buildings, we have to:
In QGIS use Vector / OpenStreetMap / Download the data and select the images with the desired information.
Then in QGIS use Vector / OpenStreetMap / Import a topology from XML and generate a database from the area of interest.
Finally, in QGIS use Vector / Export the topology to SpatiaLite and select all the required attributes (Figure 18).
Figure 17. Destroyed Houses Location
Figure 18. Required Attributes Selection To Load Vector Into Ruined Buildings
As a result, we can get a full list, with addresses, of the destroyed buildings (Figure 19).
Figure 19. Address List of Ruined Houses
If we compare these two approaches to building recognition, we notice that the CNN-based method has 78% accuracy in detecting destroyed houses, whereas the Laplace filter reaches 96.3% accuracy in recognizing destroyed buildings. As for the recognition of existing buildings, the CNN approach has 93% accuracy, while the second method has 97.9% detection accuracy. So we can conclude that the flat-surface recognition approach is more efficient than the CNN-based method.
The demonstrated method can be put to use immediately, letting people compute the extent of damage in a disaster area, including the number of houses destroyed and their locations. This would significantly help in estimating the extent of the damage and provide more precise measurements than currently exist.
OSMDeepOD – OSM and Deep Learning based Object Detection from Aerial Imagery
This is a project about object detection from aerial imagery, using open data from the OpenStreetMap (OSM) project as massive training data, and aerial imagery, worldwide or local. This project was formerly known as "OSM-Crosswalk-Detection"; now it's called OSMDeepOD, pronounced "OSM Deep 'Oh 'Dee"!
Keywords: Big Data; Data Science; Data Engineering; Machine Learning; Artificial Intelligence; Neuronal Nets; Imagery; Volunteered Geographic Information; Crowdsourcing; Geographic Information Systems; Infrastructure; Parallel Programming.
Introduction
OSM-Crosswalk-Detection is highly scalable image-recognition software for aerial photos (orthophotos). It uses the open source software library TensorFlow, with a retrained Inception V3 neural network, to detect crosswalks along streets.
This work started as part of a semester thesis in autumn 2015 at Geometa Lab, University of Applied Sciences Rapperswil (HSR).
Overview
Process
Getting Started
Prerequisites
Python – At the moment, we support Python 3.x.
Docker – In order to use volumes, I recommend using Docker >= 1.9.x.
Bounding box of the area to analyze – To start the extraction of crosswalks within a given area, the bounding box of this area is required as arguments for the manager. To get the bounding box of the desired area, you can use https://www.openstreetmap.org/export to select the area and copy-paste the corresponding coordinates. Pass the values in the following order as positional arguments to the manager: left bottom right top.
Usage
The simplest way to use the detection process is to clone the repository and build/start the docker containers.
git clone https://github.com/geometalab/OSM-Crosswalk-Detection.git
cd OSM-Crosswalk-Detection/dockerfiles/
sudo python docker_run.py -r -d
After the previous shell commands you have started a Redis instance for data persistence and a container for the detection process. Now you should be connected to a tty of the crosswalk_detection container. If you have an NVIDIA GPU and nvidia-docker installed, the detection algorithm will automatically use this GPU [1].
To start the detection process, use the src/role/main.py script [2].
Use the manager option to select the detection area and generate the jobs stored by the Redis instance.
If you have executed the result worker in the Docker container, you can move the crosswalks.json file to the /crosswalk/ directory, which is mapped to your host.
Own Orthofotos
To use your own orthophotos, you have to do the following steps:
1. Add a new directory to src/data/orthofoto
2. Add a new module to the directory with the name: 'your_new_directory'_api.py
3. Create a class in the module with the name: 'Your_new_directory'Api (the first letter needs to be uppercase)
4. Implement the function 'def get_image(self, bbox):' so that it returns a Pillow image of the bbox
5. After that you can use your API with the parameter --orthofotos 'your_new_directory'
If you have problems with the implementation, have a look at the wms or other examples, or at the sketch below.
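A minimal sketch of such a module (the provider name, endpoint URL, and bbox attribute names are assumptions for illustration):

# src/data/orthofoto/myprovider/myprovider_api.py (hypothetical path)
import io

import requests
from PIL import Image


class MyproviderApi:
    def get_image(self, bbox):
        # bbox is assumed to expose left/bottom/right/top coordinates
        params = {
            'bbox': '{},{},{},{}'.format(bbox.left, bbox.bottom, bbox.right, bbox.top),
            'format': 'image/png',
        }
        # hypothetical imagery endpoint
        response = requests.get('https://tiles.example.com/wms', params=params)
        # return the downloaded tile as a Pillow image
        return Image.open(io.BytesIO(response.content))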
Dataset
During this work, we collected our own dataset of Swiss crosswalks and non-crosswalks. The pictures have a size of 50×50 pixels and are available on request.
[1] The crosswalk_detection container is based on the nvidia/cuda:7.5-cudnn4-devel-ubuntu14.04 image; you may have to change the base image for your GPU.
[2] For more information about main.py, use the -h option.
2016 was an exciting year for Node.js developers. I mean – just take a look at this picture:
Looking back through the 6-year-long history of Node.js, we can tell that our favorite framework has finally matured to be used by the greatest enterprises, from all around the world, in basically every industry.
More great news: Node.js is the biggest open source platform ever – with 15 million+ downloads per month and more than a billion package downloads per week. Contributions have risen to the top as well: we now have more than 1,100 developers who have built Node.js into the platform it is today.
To summarize this year, we collected the 10 most important articles we recommend to read. These include the biggest scandals, events, and improvements surrounding Node.js in 2016.
Programmers were shocked, looking at broken builds and failed installations, after Azer Koçulu unpublished more than 250 of his modules from NPM in March 2016 – breaking thousands of projects, including Node and Babel.
Koçulu deleted his code because one of his modules was called Kik – same as the instant messaging app – so the lawyers of Kik claimed brand infringement, and then NPM took the module away from him.
“This situation made me realize that NPM is someone’s private land where corporate is more powerful than the people, and I do open source because Power To The People.” – Azer Koçulu
One of Azer's modules was left-pad, which padded out the left-hand side of strings with zeroes or spaces. Unfortunately, thousands of modules depended on it.
In October 2016, Facebook & Google launched Yarn, a new package manager for JavaScript.
The reason? There were a couple of fundamental problems with npm for Facebook's workflow.
At Facebook’s scale npm didn’t quite work well.
npm slowed down the company’s continuous integration workflow.
Checking all of the modules into a repository was also inefficient.
npm is, by design, nondeterministic — yet Facebook’s engineers needed a consistent and reliable system for their DevOps workflow.
Instead of hacking around npm's limitations, Facebook wrote Yarn from scratch:
Yarn does a better job at caching files locally.
Yarn is also able to parallelize some of its operations, which speeds up the install process for new modules.
Yarn uses lockfiles and a deterministic install algorithm to create consistent file structures across machines.
For security reasons, Yarn does not allow developers who write packages to execute other code that’s needed as part of the install process.
Yarn, which promises to give even developers who don't work at Facebook's scale a major performance boost, still uses the npm registry and is essentially a drop-in replacement for the npm client.
You can read the full article with the details on TechCrunch.
Jonathan Zarra, the creator of GoChat for Pokémon GO, reached 1 million users in 5 days. Zarra had a hard time paying for the servers (around $4,000/month) necessary to host 1M active users.
He never expected to get this many users. He built the app as an MVP and planned to care about scalability later. He built it to fail.
Zarra was already talking to VCs about growing and monetizing his app, still thinking he could deal with scalability later.
He was wrong.
Thanks to its poor design, GoChat was unable to scale to this many users and went down. A lot of users were lost, and a lot of money spent.
500,000 users in 5 days on $100/month server
Erik Duindam, the CTO of Unboxd, has been designing and building web platforms for hundreds of millions of active users throughout his career.
Frustrated by the poor design and sad fate of Zarra’s GoChat, Erik decided to build his own solution, GoSnaps: The Instagram/Snapchat for Pokémon GO.
Erik was able to build a scalable MVP with Node.js in 24 hours, which could easily handle 500k unique users.
The whole setup ran on one medium Google Cloud server of $100/month, plus (cheap) Google Cloud Storage for the storage of images – and it was still able to perform exceptionally well.
How did he do it? Well, you can read the full story for the technical details:
This tutorial helps you to use RabbitMQ to coordinate work between work producers and work consumers.
Unlike Redis, RabbitMQ's sole purpose is to provide a reliable and scalable messaging solution, with many features that are absent or hard to implement in Redis.
RabbitMQ is a server that runs locally, or on some node in the network. The clients can be work producers, work consumers or both, and they talk to the server using a protocol named the Advanced Message Queuing Protocol (AMQP).
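A minimal sketch of that producer/consumer split, using the third-party amqplib Node.js client (the queue name and payload are illustrative):

var amqp = require('amqplib/callback_api');

amqp.connect('amqp://localhost', function (err, conn) {
  if (err) throw err;
  conn.createChannel(function (err, ch) {
    if (err) throw err;
    var q = 'work';
    ch.assertQueue(q, { durable: true });

    // producer: enqueue a job that survives a broker restart
    ch.sendToQueue(q, Buffer.from('job payload'), { persistent: true });

    // consumer: process jobs and acknowledge them once done
    ch.consume(q, function (msg) {
      console.log('received:', msg.content.toString());
      ch.ack(msg);
    });
  });
});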
James M Snell, IBM's technical lead for Node.js, attended his first TC-39 meeting in late September.
The reason?
One of the newer JavaScript language features defined by TC-39 — namely, Modules — has been causing the Node.js core team a bit of trouble.
James and Bradley Farias (@bradleymeck) have been trying to figure out how to best implement support for ECMAScript Modules (ESM) in Node.js without causing more trouble and confusion than it would be worth.
Because of the complexity of the issues involved, sitting down face to face with the members of TC-39 was deemed to be the most productive path forward.
The full article discusses what they found and understood from this conversation.
We at Trace by RisingStack conducted a survey during the summer of 2016 to find out how developers use Node.js.
The results show that MongoDB, RabbitMQ, AWS, Jenkins, Docker and Amazon Container Services are the go-to choices for developing, containerizing and shipping Node.js applications.
The results also reveal Node developers' major pain point: debugging.
The Node Foundation announced at Node.js Interactive North America that it will oversee the Node.js Security Project which was founded by Adam Baldwin and previously managed by ^Lift.
As part of the Node.js Foundation, the Node.js Security Project will provide a unified process for discovering and disclosing security vulnerabilities found in the Node.js module ecosystem. Governance for the project will come from a working group within the foundation.
The Node.js Foundation will take over the following responsibilities from ^Lift:
Maintaining an entry point for ecosystem vulnerability disclosure;
Maintaining a private communication channel for vulnerabilities to be vetted;
Vetting participants in the private security disclosure group;
Facilitating ongoing research and testing of security data;
Owning and publishing the base dataset of disclosures, and
Defining a standard for the data which tool vendors can build on top of, and to which security vendors can add data and value as well.
You can read the full article discussing every detail on The New Stack.
The Node.js Maturity Checklist gives you a starting point to understand how well Node.js is adopted in your company.
The checklist follows your adoption through establishing company culture, teaching your employees, setting up your infrastructure, writing code, and running the application.