
Immutable.js and Its Use in React


1. Why Do We Need Immutable.js?

1.1 Side Effects of References

Shared mutable state is the root of all evil.

JavaScript (ES5) has two kinds of data: primitive values (string, number, boolean, null, undefined) and objects (references). Compiled languages such as Java also have objects, but JavaScript objects are extremely flexible and mutable. That flexibility is convenient, yet it is also the source of a great many problems.

Scenario 1:

var obj = {
  count: 1
};
var clone = obj;
clone.count = 2;

console.log(clone.count) // 2
console.log(obj.count) // 2

Scenario 2:

var obj = {
 count: 1
};

unKnownFunction(obj);
console.log(obj.count) // no way to know: unKnownFunction may have mutated obj

1.2 The Performance Cost of Deep Copies

To sidestep the side effects of references, some suggest a deep clone. Here is a typical deep-clone implementation:

function isObject(obj) {
  // typeof null === 'object', so exclude null explicitly
  return obj !== null && typeof obj === 'object';
}

function isArray(arr) {
  return Array.isArray(arr);
}

function deepClone(obj) {
  if (!isObject(obj)) return obj;
  var cloneObj = isArray(obj) ? [] : {};

  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
      var value = obj[key];

      if (isObject(value)) {
        cloneObj[key] = deepClone(value);
      } else {
        cloneObj[key] = value;
      }
    }
  }
  return cloneObj;
}

var obj = {
  age: 5,
  list: [1, 2, 3]
};

var obj2 = deepClone(obj)
console.log(obj.list === obj2.list) // false

If we only want to change obj.age, a deep copy still has to copy the list field, even though both objects hold identical list values. That copy is pure waste, so deep cloning has an inherent performance problem.

var obj = {
  age: 5,
  list: [1, 2, 3]
};
var obj2 = deepClone(obj)
obj2.age = 6;
// Even though we only changed the age field, deepClone also copied list,
// which is unnecessary work and a performance drawback

1.3 The Limits of Plain JavaScript

JavaScript itself offers two tools related to immutability: const (ES6) and Object.freeze (ES5). But const only prevents rebinding the variable, and Object.freeze is shallow, so deeply nested structures would have to be frozen recursively, which again raises performance concerns.
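A quick sketch of the Object.freeze problem:

var obj = Object.freeze({
  count: 1,
  list: [1, 2, 3]
});

obj.count = 2;      // ignored (throws a TypeError in strict mode)
obj.list.push(4);   // works: freeze is shallow, nested objects stay mutable

console.log(obj.count); // 1
console.log(obj.list);  // [1, 2, 3, 4]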

2. Advantages of Immutable.js

2.1 Persistent Data Structures

Immutable.js provides seven immutable data types: List, Map, Stack, OrderedMap, Set, OrderedSet, and Record. Any operation on an Immutable object returns a new object, for example:

var obj = {count: 1};
var map = Immutable.fromJS(obj);
var map2 = map.set('count', 2);

console.log(map.get('count')); // 1
console.log(map2.get('count')); // 2

For more on persistent data structures, see Wikipedia.

2.2 Structural Sharing

When we operate on an Immutable object, Immutable.js, which is built on hash map tries and vector tries, clones only the changed node and its ancestors; everything else is left untouched. The unchanged parts are shared between the old and new objects, which greatly improves performance.

var obj = {
  count: 1,
  list: [1, 2, 3, 4, 5]
}
var map1 = Immutable.fromJS(obj);
var map2 = map1.set('count', 2);

console.log(Immutable.is(map1.get('list'), map2.get('list'))); // true

The diagram below (borrowed from the web) illustrates the structural-sharing process:

2.3 Support for Lazy Operations

Immutable.js borrows from functional languages such as Clojure, Scala, and Haskell and introduces a special structure called Seq (short for Sequence); other Immutable collections (such as List and Map) can be converted to it with toSeq().

A Seq has two properties: it is immutable and it is lazily evaluated. In the demo below, eagerly mapping over the numbers from 1 to infinity throws an exception, yet reading just two of the values works fine, because the result of map is never materialized; values are computed only when they are needed.

Immutable.Range(1, Infinity)
.map(n => -n)
// Error: Cannot perform this action with an infinite size.

Immutable.Range(1, Infinity)
.map(n => -n)
.take(2)
.reduce((r, n) => r + n, 0);
// -3

2.4 A Powerful API

The Immutable.js documentation is quite geeky. It provides a large number of methods; some mirror their native JavaScript counterparts, lowering the learning curve, while others offer handy shortcuts, such as setIn and updateIn for deep operations.

var obj = {
  a: {
    b: {
      list: [1, 2, 3]
    }
  }
};
var map = Immutable.fromJS(obj);
var map2 = map.updateIn(['a', 'b', 'list'], (list) => {
  return list.push(4);
});

console.log(map2.getIn(['a', 'b', 'list']))
// List [ 1, 2, 3, 4 ]

3. Using It with React

3.1 Fast: Performance Optimization

React is a UI = f(state) library. To address performance it introduced the virtual DOM, which applies DOM changes through a diffing algorithm so that updates stay efficient.

Sounds perfect, but there is a catch: when setState is called, the virtual DOM diff runs even if the state has not actually changed, because by default shouldComponentUpdate in React's lifecycle always returns true. So how do we compare state inside shouldComponentUpdate?

React's answer is PureRenderMixin, which overrides shouldComponentUpdate, but only with a shallow comparison:

var ReactComponentWithPureRenderMixin = {
  shouldComponentUpdate: function(nextProps, nextState) {
    return shallowCompare(this, nextProps, nextState);
  },
};

function shallowCompare(instance, nextProps, nextState) {
  return (
    !shallowEqual(instance.props, nextProps) ||
    !shallowEqual(instance.state, nextState)
  );
}

A shallow comparison only handles simple cases; with complex, nested data the redundant diffs remain, so PureRenderMixin is still not an ideal solution.

Enter Immutable.js: because its structures are immutable and structurally shared, data comparison is fast:


shouldComponentUpdate: function(nextProps, nextState) {
  return deepCompare(this, nextProps, nextState);
},

function deepCompare(instance, nextProps, nextState) {
    return !Immutable.is(instance.props, nextProps) ||
        !Immutable.is(instance.state, nextState);
}
    

3.2 Safe: Safer State Updates

When calling setState in React, be aware that the state merge is a shallow merge:

getInitialState: function () {
  return {
    count: 1,
    user: {
      school: {
        address: 'beijing',
        level: 'middleSchool'
      }
    }
  };
},
handleChangeSchool: function () {
  this.setState({
    user: {
      school: {
        address: 'shanghai'
      }
    }
  });
},
render: function () {
  console.log(this.state.user.school);
  // {address: 'shanghai'} -- level is gone because the merge is shallow
}

To put your mind at ease, here is the React source responsible for the state merge:

// The state merge happens in ReactCompositeComponent.js; the merge function
// comes from the `Object.assign` module
function assign(target, sources) {
  ....
  var to = Object(target);
  ...
  for (var nextIndex = 1; nextIndex < arguments.length; nextIndex++) {
    var nextSource = arguments[nextIndex];
    var from = Object(nextSource);
    ...
    for (var key in from) {
      if (hasOwnProperty.call(from, key)) {
        to[key] = from[key];
      }
    }
  }
  return to
}

3.3 Convenient: A Powerful API

Immutable.js has a rich API with geek-friendly documentation, which makes working with state and the store very convenient.

3.4 History: Implementing Undo

Every state can be saved and is guaranteed never to change afterwards, so implementing a history with rollback becomes straightforward.
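A minimal sketch of the idea, using the Immutable.js API shown earlier:

var history = [];
var current = Immutable.fromJS({count: 0});

function setState(next) {
  history.push(current); // safe to keep: a stored state can never be modified
  current = next;
}

function undo() {
  if (history.length > 0) {
    current = history.pop();
  }
}

setState(current.set('count', 1));
setState(current.set('count', 2));
undo();
console.log(current.get('count')); // 1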

4. Problems with Introducing Immutable.js in React

  • Large source: the library is more than 5,000 lines of code and about 16 KB after compression
  • Type conversion: frequent interaction with a server means constantly converting between Immutable objects and plain JavaScript, which quickly becomes tedious (see the sketch after this list)
  • Invasiveness: third-party components force conversions at the boundary, and with react-redux the connect wrapper already implements shouldComponentUpdate, so Immutable.js cannot help there
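A sketch of the conversion cost mentioned above (the data is a placeholder, not from the article):

// Server response (plain JSON) -> Immutable for the store
var raw = {count: 1, list: [1, 2, 3]};      // e.g. parsed from an API response
var state = Immutable.fromJS(raw);          // deep conversion

// Immutable -> plain JS again before sending it back
var payload = state.set('count', 2).toJS();
console.log(JSON.stringify(payload));       // {"count":2,"list":[1,2,3]}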


Serverless IoT With Particle And Amazon Web Services


The internet of things, or IoT for short, has seen tremendous activity and interest this past year. From enterprises, to healthcare providers, and even automobile companies, the idea of a connected world is beginning to appeal more and more to everyday consumers and businesses alike. This tectonic shift in the way the things around us operate has opened the door to a plethora of new and exciting products, and placed cloud service platforms, such as Amazon Web Services (AWS), at the forefront of scalable support for these new technologies.

Sometime last year I picked up a Particle Photon; a tiny, low powered wifi chip that’s intended for rapid prototyping around IoT projects. The small board, about the size of an eraser, utilizes a cloud based development toolset to update the “firmware” of the devices with your custom C++ code. The SDK for the device gives a developer access to a number of pins that allows for extensive capabilities and enhancements.

In this post, I outline the steps necessary to create a simple data capturing device, in this case a sleep sensor, that sends the information collected to AWS for processing and storage. This technique can be used for a number of entry level IoT projects, and all of the tools included in this write up are “off the shelf” and available to everyday consumers.

Particle Photon + AWS

AWS and Particle In Action

One of the more impressive features of the Particle platform is its built in webhook integration. The wifi connected circuit boards have the ability to send data to, and receive data from, webhooks setup through the Particle service. This capability opens the door to a number of interesting solutions, including out-of-the-box integrations with IFTTT.

This webhook integration also allows for interacting with custom API endpoints, developed specifically for processing the data that is generated by the Particle. I set out to create a sleep sensor, leveraging an accelerometer in conjunction with the Particle and AWS to process the “quality” of my sleep throughout the night, sending minute-by-minute data about my movements to the cloud for processing. The following section will explain how this worked, and what services must be leveraged to make this possible.

The AWS Configuration

For this project, I used three AWS services: a DynamoDB table to store the information captured, a Lambda function to process the data from the Particle and write it to the table, and an API Gateway to publicly expose the Lambda function and perform some authentication before allowing the data to be written. This section will explain how to configure these services.

DynamoDB Configuration

DynamoDB is a NoSQL storage service that is great for IoT projects. It’s extremely inexpensive, and performance is fantastic. For this project I created a single table named SleepData with a Primary Partition Key of deviceID and a Primary Sort Key of published_at set to a number. This will allow us to scan or query our DynamoDB table for specific Photon devices at specific time periods when it comes time to read the data.
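As a sketch of how that key schema could later be read back (not covered in this write-up; the table region, device id, and timestamps are placeholders), a query with the Node.js aws-sdk DocumentClient might look like this:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

const params = {
  TableName: 'SleepData',
  KeyConditionExpression: 'deviceID = :d AND published_at BETWEEN :from AND :to',
  ExpressionAttributeValues: {
    ':d': 'my-photon-id',     // placeholder device id
    ':from': 1457481600000,   // placeholder start of the night (epoch millis)
    ':to': 1457510400000      // placeholder end of the night (epoch millis)
  }
};

docClient.query(params, (err, data) => {
  if (err) console.error(err);
  else console.log(data.Items);
});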

After creating the DynamoDB table, I began configuring the Lambda function. The function is intended to read a JSON payload delivered via the API Gateway, sanitize and normalize the data, and store it in the DynamoDB table. I also included some logic in the function to capture when it should be recording this information and when it shouldn’t, so I’m only storing information while I’m actually asleep.

Gist: https://gist.github.com/mlapida/60748a378d4986170b6f
from __future__ import print_function
import logging
from datetime import datetime

import boto3
from boto3.dynamodb.conditions import Key

# enable basic logging to CloudWatch Logs
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# set up the DynamoDB table
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('SleepData')

# set up conversion to epoch
epoch = datetime.utcfromtimestamp(0)
now = datetime.now()


def lambda_handler(event, context):
    # determine if the user is asleep
    sleepState = sleepCheck()

    # if the user has triggered the buttons, perform some logic
    # (the Particle publishes the string "True" when the buttons are pressed)
    if event['data'] == 'True':
        # if the user is awake, toggle to asleep; if not, vice versa
        if sleepState == 'awake':
            table.put_item(
                Item={
                    'event_name': event['name'],
                    'published_at': int(unix_time_millis(datetime.strptime(
                        event['published_at'], '%Y-%m-%dT%H:%M:%S.%fZ'))),
                    'data': 'true',
                    'state': 'asleep',
                    'deviceID': event['source']
                }
            )
        else:
            table.put_item(
                Item={
                    'event_name': event['name'],
                    'published_at': int(unix_time_millis(datetime.strptime(
                        event['published_at'], '%Y-%m-%dT%H:%M:%S.%fZ'))),
                    'data': 'true',
                    'state': 'awake',
                    'deviceID': event['source']
                }
            )
    else:
        # if the user is asleep and the buttons weren't pressed,
        # send the movement data to DynamoDB
        if sleepState == 'asleep':
            table.put_item(
                Item={
                    'event_name': event['name'],
                    # convert the date/time to epoch for storage in the table
                    'published_at': int(unix_time_millis(datetime.strptime(
                        event['published_at'], '%Y-%m-%dT%H:%M:%S.%fZ'))),
                    'data': int(event['data']),
                    'state': 'asleep',
                    'deviceID': event['source']
                }
            )
        else:
            print('Not asleep')
            print(event)

    return 'Success!'


# a function to convert the time to epoch milliseconds
def unix_time_millis(dt):
    return (dt - epoch).total_seconds() * 1000.0


# a function to check whether the user is currently "sleeping"
def sleepCheck():
    fe = Key('data').eq('true')
    pe = 'published_at, deviceID, #da, #st'
    ean = {'#da': 'data', '#st': 'state'}

    response = table.scan(
        FilterExpression=fe,
        ProjectionExpression=pe,
        ExpressionAttributeNames=ean
    )

    # return the state recorded by the most recent toggle event
    x = len(response['Items'])
    y = 0
    for i in response['Items']:
        y = y + 1
        if y == x:
            return str(i['state'])

Finally, I configured the API Endpoint to send data to the Lambda function. I wanted to limit who was capable of sending the information, so I also required an API Key for all requests. You can link your Lambda function to an API Gateway by navigating to the API endpoints section of your Lambda function in the AWS console. You can then create a new API gateway and set it up accordingly. There is an excellent tutorial on API Gateway and Lambda in the AWS documentation.

For this API Gateway, we’ll want to make sure we have the Method set to PUT and Security set to Open with access key. This will ensure that the data we send the API Gateway is secured, and that no one other than those with an access key can send data to our endpoint.

That about sums up what’s going on on the AWS side. In a future project I’ll explore processing the data stored in the DynamoDB table. For now, we’re only concerned with capturing it for future use.

The Particle Photon Configuration

For this project, I leveraged the “Internet Button” shield, which comes packed with an accelerometer, four buttons and a number of LED’s. This gave me the capability to interact with the API, turning data recording on and off and giving me some visual feedback when I triggered an action, such as setting it to sleep, on the device.

In the world of Particle, a webhook is a published endpoint, attached to your account, that contains the information needed to send data to an external API. Setting this up can be a bit of a challenge, as you’ll need to perform the operation through the CLI, and there’s not a whole lot in the form of feedback when something is setup wrong. The following steps can be used to configure a Particle webhook with an AWS API Gateway.

Gist: https://gist.github.com/mlapida/6c0646a60a3ddcc661ad
{
  "event": "sendSleep",
  "url": "https://[yourendpoint].execute-api.us-east-1.amazonaws.com/prod/ParticleSleepV1",
  "requestType": "POST",
  "headers": {
    "x-api-key": "[yourapikey]"
  },
  "json": {
    "name": "{{SPARK_EVENT_NAME}}",
    "data": "{{SPARK_EVENT_VALUE}}",
    "source": "{{SPARK_CORE_ID}}",
    "published_at": "{{SPARK_PUBLISHED_AT}}"
  },
  "mydevices": true,
  "noDefaults": true
}

The first thing you’ll need to do is install the Particle CLI and log in. Luckily, Particle has put together a great resource for setting this up. After getting the CLI configured, you’ll need to create a JSON definition file for the webhook. This file contains the API key sent in the header, a template for the data being sent, and the URL of the endpoint. I have a sample of this file above. Finally, from the CLI, you’ll need to add the webhook:

particle webhook create SleepAPI.json

This will associate the API, as it is named in the JSON file, with your account. To send data to the API, you’ll use the “Particle.publish()” function from your code snippet with the API name as the first argument. More information on Particle’s implementation of webhooks can be found in their documentation.

The small block of code used for the Particle loops every 100 milliseconds and captures any movement that took place. I’m using a special function, specific to the Internet Button, that returns the “lowest LED” during each loop. If the “lowest LED” has changed since the previous loop, I record that as a single movement. After one full minute of looping, the total is sent to my AWS API Gateway via the Particle webhook functionality.

Gist: https://gist.github.com/mlapida/579affd5395b8cc74eb9
// Make sure to include the special library for the Internet Button
#include "InternetButton/InternetButton.h"

// Create a Button named b. It will be your friend, and you two will spend lots of time together.
InternetButton b = InternetButton();

int ledOldPos = 0;
char ledPosTrust[5];
int moveCount = 0;
int loopCount = 0;

// The code in setup() runs once when the device is powered on or reset. Used for setting up states, modes, etc.
void setup() {
    // Tell b to get everything ready to go
    // Use b.begin(1); if you have the original SparkButton, which does not have a buzzer or a plastic enclosure
    b.begin();
}

/* loop(), in contrast to setup(), runs all the time. Over and over again.
   Remember this particularly if there are things you DON'T want to run a lot. Like Spark.publish() */
void loop() {
    // Load up the special "lowestLed" reading
    int ledPos = b.lowestLed();

    // Turn the LEDs off so they don't all end up on
    b.allLedsOff();

    // I'm movin', increment the counter
    if (ledOldPos != ledPos) {
        sprintf(ledPosTrust, "%d", ledPos);
        moveCount++;
    }

    // The buttons have been triggered! Record this!
    if ((b.buttonOn(2) and b.buttonOn(4)) or (b.buttonOn(1) and b.buttonOn(3))) {
        b.ledOn(3, 0, 255, 0); // Green
        b.ledOn(9, 0, 255, 0); // Green
        Particle.publish("sendSleep", "True", 60, PRIVATE);
        delay(500);
    }

    // If we've looped 600 times (one minute at 100 ms per loop), fire off the webhook
    if (loopCount >= 600) {
        Particle.publish("sendSleep", String(moveCount), 60, PRIVATE);
        moveCount = 0;
        loopCount = 0;
    }

    loopCount++;
    ledOldPos = ledPos;

    // Wait a mo'
    delay(100);
}

The Results

The end result of tying all of these services together is a pretty robust, low cost IoT platform. I was able to create a prototype in a few hours, using off the shelf products and a bit of connectivity code sprinkled throughout. While my example is a movement “sleep” tracker, it’s easy to see how this type of IoT design can be used in a number of applications.

A Night of Sleep Data


The chart above is a sample of data extracted from the DynamoDB table after a night of sleep. The data captured is complete, and an exciting first step in the creation of my roll-your-own sleep tracker. There’s still plenty of work to do when it comes to processing the information, but the initial results are inspiring. The serverless architecture is quickly becoming a viable reality.


React Native + Meteor Boilerplate


Over the last few weeks I’ve been writing a weekly blog post about various aspects of developing React Native applications that interface with a Meteor backend.

Here’s a basic series related to authentication:

  1. Easily Connect React Native to a Meteor Server
  2. Meteor Authentication from React Native
  3. Password Hashing for Meteor React Native

I’ve really enjoyed the process of research, writing, and the feedback I’ve gotten. I want to continue this process for the foreseeable future but I’ve found myself building and rebuilding a fair amount of boilerplate code.

So this week I looked at some larger React Native projects, from the community and projects I’ve been involved with and started to pull out the strong points. I want this project to serve as a simple starting point for React Native + Meteor applications that works on both iOS and Android.

So, let’s collaborate and make this a great starting point for future tutorials and projects. You can view the project on Github.

Want these walkthroughs and tutorials emailed to you as soon as they come out? Sign up for my email list below and I’ll send them to you!


Intro to Debugging React Native (iOS and Android)


Debugging a React Native app, while similar to the web, is a bit different. Once you get the hang of it and know the tools it’s simple. This small guide is intended to reduce that learning curve.

Opening the Debug Menu on iOS

Simulator

  • cmd + D
  • cmd + ctrl + Z
  • Hardware > Shake Gesture (in the simulator menu)
    iOS Menu for Shake Gesture

Physical Device

  • Shake your device

Opening the Debug Menu on Android

Simulator

  • cmd + m (Genymotion)
  • cmd + shift + r via Frappe
  • Press the Hardware Menu Button
    Android Menu for Hardware Button

Physical Device

  • Shake your device
  • Press the Hardware Menu Button

Debugging on an iOS Device

  • Change localhost to your computer’s IP address in AppDelegate.m – the line that looks like jsCodeLocation = [NSURL URLWithString:@"http://localhost:8081/index.ios.bundle?platform=ios&dev=true"];
  • Change localhost to your computer’s IP address in node_modules/react-native/Libraries/WebSocket/RCTWebSocketExecutor.m

Also, for anyone using the React Native Meteor Boilerplate you’ll have to change localhost to computer’s IP address for the ddp config.

Debugging on an Android Device

WARNING: I don’t have an Android device so I haven’t been able to test this myself. I’m pulling these instructions directly from React Native’s documentation. If someone can test this or clarify the steps necessary I (and I’m sure many others) would be very grateful for your assistance.

“If you’re running Android 5.0+ device connected via USB you can use adb command line tool to setup port forwarding from the device to your computer. For that run: adb reverse tcp:8081 tcp:8081 (see this link for help on adb command). Alternatively, you can open dev menu on the device and select Dev Settings, then update Debug server host for device setting to the IP address of your computer.”


Exploring the React Native Debug Menu

React Native Debug Menu

Reload

This allows you to reload the Javascript of your app. You can accomplish the same thing by pressing cmd + R.

Enable/Disable Chrome Debugging

This opens up a debugging window in Chrome at localhost:8081/debugger-ui. You can open up the actual console via cmd + option + I. Until React Devtools are fixed (discussed later) the chrome debugger is really only used for access to the console.

Enable/Disable Live Reload

Rather than having to constantly manually reload changes to your app you can set up Live Reload in React Native. This will reload your app’s Javascript any time you save a .js file in your app.

Start/Stop Systrace

This is one I admittedly don’t have any experience with. I believe it is used for profiling Android UI performance based on this section in the docs. If you have a better understanding of this tool please let me know!

Show/Hide Inspector

React Native Inspector

This allows you to get a similar experience to what you may be used to in web debugging, such as the “Elements” tab in Chrome. You can choose a component in your device and see some of the properties that are assigned to it. You can also access this via the keyboard shortcut cmd + I.

Show Perf Monitor

React Native Perf Monitor

This tool allows you to monitor the performance of your app through various detailed metrics. If you’re ever wondering why the performance of your app is poor, such as a jittery UI, this tool will be your friend. Here is an article that may help you work through some of your performance issues.


Debugging your Code with Google Chrome

Just like in the browser, you can place debugger; in your code to pause and inspect. Here’s an article from Google that gives an overview of what you can do in DevTools when execution stops on a debugger; statement.
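For example, a hypothetical component method with a breakpoint:

componentDidMount() {
  debugger; // with Chrome debugging enabled, execution pauses here and you can inspect props/state
  console.log(this.props);
}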

When do you have to rebuild the app?

Whenever native code (Objective C or Java) changes or you’re adding new resources. The React Native packager will listen to changes in your Javascript code and recompile those for your RN app to use but this does NOT apply to native code that you may edit.

Why doesn’t React Devtools work for React Native?

At one point in time React Devtools worked for React Native. Due to various changes this integration is currently, as of 3/1/2016, broken. You can follow along with the issue here: https://github.com/facebook/react-devtools/issues/229.

I hope you found this helpful! If you’ve got any other tips for debugging React Native please let me know (@spencer_carli) so I can update this article with your tip!


State of the Art JavaScript in 2016


Image “Question!” by Stefan Baudy, CC BY 2.0

So, you’re starting a brand new JavaScript front end project or overhauling an old one, and maybe you haven’t kept up with the breakneck pace of the ecosystem. Or you did, but there’s too many things to choose from. React, Flux, Angular, Aurelia, Mocha, Jasmine, Babel, TypeScript, Flow, oh my! By trying to make things simpler, some fall into a trap captured by one of my favorite XKCD comics.

Well, the good news is the ecosystem is starting to slow down. Projects are merging. Best practices are starting to become clear. People are building on top of existing stuff instead of building new frameworks.

As a starting point, here’s my personal picks for most pieces of a modern web application. Some choices are likely controversial and I will only give basic reasoning behind each choices. Keep in mind they’re mostly my opinion based on what I’m seeing in the community and personal experiences. Your mileage may vary.

Core library: React

The clear winner right now, is React.

  • Components all the way down makes your application much easier to reason about.
  • The learning curve is very flat. The important APIs would fit on one page.
  • JSX is awesome. You get all the power of JavaScript and its tooling when writing your markup.
  • It is the natural match for Flux and Redux (more on that later).
  • The React community is amazing, and produced many best of breed tools such as Redux (also more on that later).
  • Writing high quality data flow is much easier in large applications than dealing with 2 way data binding (eg: Knockout)
  • If you ever need to do server side rendering, React is where it’s at.

There’s plenty of monolithic frameworks like Ember, Aurelia and Angular that promise to take care of everything, but the React ecosystem, while requiring a few more decisions (that’s why you’re reading this!), is much more robust. Many of these frameworks, such as Angular 2.0, are playing catch up with React.

Picking React isn’t a technology decision, it’s a business decision.

Bonus points: Once you start working on your mobile apps, you’ll be ready for it thanks to React Native.

Application life cycle: Redux

This isn’t the final logo!

Now that we have our view and component layer, we need something to manage state and the lifecycle of our application. Redux is also a clear winner here.

Alongside React, Facebook presented a design pattern for one way data flow called Flux. Flux largely delivered on its promise of simplifying state management, but it also brought with it more questions, such as how to store that state and where to do Ajax requests.

To answer those questions, countless frameworks were built on top of the Flux pattern: Fluxible, Reflux, Alt, Flummox, Lux, Nuclear, Fluxxor and many, many more.

One Flux-like implementation eventually caught the community’s attention, and for good reasons: Redux.

In Redux, almost all of the moving parts are pure functions. There is one centralized store and source of truth. Reducer functions are responsible for manipulating data that makes up the store. Everything is much clearer than in vanilla Flux.

More importantly, learning Redux is a snap. Redux’s author, Dan Abramov is a fantastic teacher, and his training videos are fantastic. Watch the videos, become a Redux expert. I’ve seen a team of engineers go from nearly zero React experience to having a production ready application with top notch code in a few weeks.

Redux’s ecosystem is as top notch as Redux itself. From the nearly magical devtool to the amazing memoization utility reselect, the Redux community got your back.

One thing to be careful is the natural instinct to try and abstract away the Redux boilerplate. There’s good reasons behind all those pieces. Make sure you tried it and understand the “why” before trying to blindly improve on it.

Language: ES6 with Babel. No types (yet)

Avoid CoffeeScript. Most of its better features are now in ES6, a standard. Tooling (such as CoffeeLint) is very weak. Its community is also rapidly declining.

ES6 is a standard. Most of it is supported in the latest version of major browsers. Babel is an amazing “pluggable” ES6 compiler. Configure it with the right presets for your target browsers and you’re good to go.

What about types? TypeScript and Flow both offer ways to add static typing to JavaScript, enhancing tooling and catching bugs without needing tests. With that said, I suggest a wait and see approach for now.

TypeScript tries too hard to make JavaScript like C# or Java, lacking on modern type system features such as algebraic data types (and you really want those if you’re going to do static types!). It also doesn’t handle nulls as well as Flow.

Flow can be much more powerful, catching a wider variety of bugs, but it can be hard to setup. It’s also behind Babel in terms of language features and has poor Windows support.

I’ll say something controversial: Types are not nearly as critical to front end development as some will have you believe (the whole argument will have to be in a future blog post). Wait until the type systems are more robust and stick to Babel for now, keeping an eye on Flow as it matures.

Linting & style: ESLint with AirBNB

Another clear winner: ESLint. With its React plugin and awesome ES6 support, one could not ask for more of a linter. JSLint is dated. ESLint does what the JSHint + JSCS combo does in a single tool.

You do have to configure it with your style preferences. I highly recommend AirBNB’s styleguide, most of which can be enforced via the ESLint airbnb config. If your team is the kind that will argue about code style, just use this style guide as gospel and the end-all-be-all of arguments. It isn’t perfect, but the value of having consistent code is highly underestimated.
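For reference, a minimal .eslintrc along these lines is enough to get started (assuming eslint-config-airbnb is installed; the rule override is just an example):

{
  "extends": "airbnb",
  "env": {
    "browser": true
  },
  "rules": {
    "no-console": 0
  }
}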

Once you’re comfortable with it, I’d suggest enabling even more rules. The more that can be caught while typing (with an ESlint plugin for your favorite editor), the less decision fatigue you’ll have and the more productive you and your team will be.

Dependency management: It’s all about NPM, CommonJS and ES6 modules

This one is easy. Use NPM. For everything. Forget about Bower. Build tools such as Browserify and Webpack bring NPM’s power to the web. Versioning is handled easily and you get most of the Node.js ecosystem. Handling of CSS is still less than optimal, though.

One thing you’ll want to consider is how to handle building on your deployment server. Unlike Ruby’s Bundler, NPM uses wildcard versions, and packages can change between the time you finish coding and you start deploying. Use a shrinkwrap file to freeze your dependencies (I recommend using Uber’s shrinkwrap to get more consistent output). Also consider hosting your own private NPM server using something like Sinopia.

Babel will compile ES6 module syntax to CommonJS. You’ll get a future proof syntax, and the benefits of static code analysis, such as tree shaking when using a build tool that supports it (Webpack 2.0 or Rollup).
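For concreteness, a tiny sketch (module name and function are made up) of what you write versus roughly what Babel emits:

// what you write (ES6 module syntax)
import { formatDate } from './utils'; // './utils' is a hypothetical module
export const today = formatDate(new Date());

// roughly what Babel emits (CommonJS)
var _utils = require('./utils');
exports.today = (0, _utils.formatDate)(new Date());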

Build tool: Webpack

Unless you fancy adding hundreds of script tags to your pages, you need a build tool to bundle your dependencies. You also need something to allow NPM packages to work in browsers. This is where Webpack comes in.

A year ago you had a lot of potential options here. Your environment, such as Rails’ sprockets could do it. RequireJS, Browserify and Webpack were the JavaScript based solutions. Now, RollupJS promises to handle ES6 modules optimally.

After trying them all, I highly recommend Webpack:

  • It is more opinionated yet can be configured to handle even the craziest scenarios.
  • All main module formats (AMD, CommonJS, globals) are supported.
  • It has features to fix broken modules.
  • It can handle CSS.
  • It has the most comprehensive cache busting/hashing system (if you push your stuff to CDNs).
  • It supports hot reload out of the box.
  • It can load almost anything.
  • It has an impressive list of optimizations.

Webpack is also by far the best to handle extremely large SPA applications with built in code splitting and lazy loading.

Be warned that the learning curve is brutal! But once you get it, you’ll be rewarded with the best build system available.

But what about Gulp or Grunt? Webpack is much better at processing assets. They can still be useful if you need to run other kinds of tasks though (usually you won’t). For basic tasks (such as running Webpack or ESLint), I recommend simply using NPM scripts.

Testing: Mocha + Chai + Sinon (but it’s not that simple)

There are a LOT of options for unit testing in JavaScript, and you can’t go wrong with any of them. If you have unit tests, that’s already good!

Some choices are Jasmine, Mocha, Tape and Ava and Jest. I’m sure I’m forgetting some. They all have something they do better than the rest.

My criteria for a test framework are as follow:

  • It should work in the browser for ease of debugging
  • It should be fast
  • It should easily handle asynchronous tests
  • It should be easy to use from the command line
  • It should let me use whatever assertion and mock library I want

The first criteria knocks out Ava (even though it looks awesome) and Jest (auto-mocking isn’t nearly as nice as it sounds, and is very slow anyway).

You can’t really go wrong with Jasmine, Mocha or Tape. I prefer Chai’s asserts (because of all the available plugins) and Sinon’s mocks over Jasmine’s built-in constructs, and Mocha’s asynchronous test support is superior (you don’t have to deal with done callbacks). Chai as Promised is amazing. I highly recommend using Dirty Chai to avoid some headaches, though. Webpack’s mocha-leader lets you automatically run tests as you code.

For React specific tooling, look at AirBNB’s Enzyme and Teaspoon (this isn’t the Rails based Teaspoon).

I really enjoy Mocha’s features and support. If you want something more minimalist, read this article about Tape.

Utility library: Lodash is king, but look at Ramda

JavaScript doesn’t have a strong core of utilities like Java or .NET does, so you’ll most likely want to include one.

Lodash is by far the king and contains the entire kitchen sink. It is also one of the most performant, with features such as lazy evaluation. You don’t have to include the whole thing if you don’t want to, either: Lodash lets you include only the functions you use (pretty important considering how large it has become). As of 4.x, Lodash also natively supports an optional “functional” mode for the FP geeks among us.

If you’re into functional programming, however, take a look at the fantastic Ramda. If you decide to use it, you might still need to include some Lodash functions (Ramda is focused on data manipulation and functional constructs almost exclusively), but you’ll get a lot of the power of functional programming languages in a JavaScript friendly way.

Http requests: Just use fetch!

Many React applications don’t need jQuery at all anymore. Unless you’re working on a legacy application or have 3rd party libraries that depend on it, there’s no reason to include it. That means you need to replace $.ajax.

I like to keep it simple and just use fetch. It’s promise based, it’s built into Firefox and Chrome, and it Just Works ™. For other browsers, you’ll need to include a polyfill. I suggest isomorphic-fetch, to ensure you have all your bases covered, including server side.
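A minimal sketch of such a request (the endpoint is a placeholder):

import 'isomorphic-fetch'; // polyfill so fetch also exists in older browsers and on the server

fetch('/api/users/42') // placeholder endpoint
  .then(response => {
    if (!response.ok) {
      throw new Error('HTTP ' + response.status);
    }
    return response.json();
  })
  .then(user => console.log(user))
  .catch(error => console.error(error));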

There are other good libraries such as Axios, but I haven’t needed much beyond fetch.

For more details about why promises are important, see my post on asynchronous programming.

Styling: Consider CSS modules

This is an area I feel is lagging behind. SASS is the current go to, and node-sass is a great way to use it in your JavaScript project. That said, I feel it’s missing a lot to be a perfect solution. Lack of reference imports (a way to import just variables and mixins from a file, without duplicating selectors) and native URL rewriting makes it harder than needed to keep things lean and clean in production. node-sass is a C library, and will have to be kept in sync with your Node version.

LESS does not suffer from these issues, but has fallen out of favor due to lacking many of SASS’ features.

PostCSS is much more promising, allowing you to kind of “make your own CSS processor”. I’d recommend using it on its own, or even in ADDITION to your preferred processor for things such as AutoPrefixer instead of importing a big library like Bourbon.

One thing worthy of attention though, are CSS modules. CSS modules prevents the “cascading” part of CSS, allowing us to keep our dependencies explicit, and prevents conflict. You’ll never have to worry about overriding classes by accident or having to make ultra explicit names for your classes. It works great with React, too. One drawback: css-loader with CSS modules enabled is REALLY slow, so if you plan on having hundreds of kilobytes of CSS, you may want to avoid it until it gets better.

If I was to start a large project from scratch today, I’d probably just use PostCSS along with pre-compiled versions of my favorite CSS libraries.

Regardless of what you choose, you may want to look at my post on CSS performance with Webpack, especially if you go with SASS.

Universal (Isomorphic) JavaScript: Make sure you need it.

Universal or Isomorphic JavaScript refers to JavaScript that can be used on both the client and the server. This is primarily used to pre-render pages server side for performance and SEO purpose. Thanks to React, what was once only the realm of giants such as Ebay or Facebook is now within reach of most development shops. It is still not “free” though, adding significant complexity and limiting your options in term of libraries and tooling.

If you are building a B2C (Business to Customer) website, such as an e-commerce website, you may not have a choice but to go that route. For internal web or B2B (Business to Business) applications however, that kind of initial performance may not be required. Discuss with your product manager to see if the cost:benefit ratio is worth the trouble.

The API: There’s still no true answer.

It seems everyone lately is asking themselves what to do for an API. Everyone is jumping on the RESTful API bandwagon, and SOAP is a memory of the past. There are various specifications such as HATEOAS, JSON API, HAL, and GraphQL, among others.

GraphQL gives a large amount of power (and responsibility) to the client, allowing it to make nearly arbitrary queries. Along with Relay, it handles client state and caching for you. Implementing the server side portion of GraphQL is difficult and most of the documentation is for Node though.

Netflix’s Falcor looks like it will eventually give us a lot of what GraphQL/Relay offers, with simpler server requirements. It is however only a developer preview and not ready for prime time.

All the well known specifications have their quirks. Some are overly complex. Some only handle reads and don’t cover update. Some stray significantly from REST. Many people choose to make their own, but then have to solve all the design problems on their own.

I don’t think any solutions out there is a slam dunk, but here’s what I think your API should have:

  • It should be predictable. Your endpoints should follow consistent conventions.
  • It should allow fetching multiple entities in one round trip: needing 15 queries to fetch everything you need on page load will give poor performance.
  • Make sure you have a good update story: many specifications only covers reads, and you’ll need to update stuff sometimes.
  • It should be easy to debug: looking at the Chrome inspector’s network tab should easily let me see what happened.
  • It should be easy to consume: I should be able to easily consume it with fetch, or have a well supported client library (like Relay)

I haven’t found a solution that covers all of the above. If there is one, let me know.

Consider looking at Swagger to document your API if you go the standard RESTful path.

Desktop applications: Electron.

Electron is the foundation of the great Atom editor and can be used to make your own applications. At its core, it is a version of Node that can open Chrome windows to render a GUI, and has access to the operating system’s native APIs without a browser’s typical security sandboxing. You’ll be able to package your application and distribute it like any other desktop application, complete with an installer and auto-updates.

This is one of the easiest ways to make an application that can run on OSX, Windows and Linux while reusing all the tools listed above. It is well documented and has a very active community.

You may have heard of nw.js (formerly node-webkit) which has existed longer (and does almost the same thing), but Electron is now more stable and is easier to use.

Take a look at this great boilerplate to play around with Electron, React and hot reload. You’ll probably want to start from scratch if you’re serious about making your own application so you understand how all the pieces work.

Who to follow and where to learn more?

This is a place where I’m falling short, but on Twitter I follow the following people:

While there’s many more worth noting, those people retweet almost anything worth looking at, so they’re a good start.

Consider reading Pete Hunt’s Learning React. Follow the order!

Dan Abramov published the Getting started with Redux video series. I can’t overstate how amazing it is at teaching Redux.

Dan also published his own list, and it’s probably better than mine.

Mark Erikson’s collection of React/Redux links is an ever growing gold mine.

Read Removing user interface complexity, or why React is awesome to get a walkthrough of how React is designed and why.

If you don’t need it, don’t use it

The JavaScript ecosystem is thriving and moving quickly, but there’s finally light at the end of the tunnel. Best practices are no longer changing constantly, and it is becoming increasingly clear which tools are worth learning.

The most important thing to remember is to keep it simple and only use what you need.

Is your application only 2–3 screens? Then you don’t need a router. Are you making a single page? Then you don’t even need Redux, just use React’s own state. Are you making a simple CRUD application? You don’t need Relay. Are you learning ES6? You don’t need Async/Await or Decorators. Are you just starting to learn React? You don’t need Hot reload and server rendering. Are you starting out with Webpack? You don’t need code splitting and multiple chunks. Are you starting with Redux? You don’t need Redux-Form or Redux-Sagas.

Keep it simple, one thing at a time, and you’ll wonder why people ever complained about JavaScript fatigue.

Did I miss anything?

And there you have it, my view of the current state of JavaScript. Do you think I forgot an important category? Do you think I’m objectively wrong on one of these choices? Do you recommend something else? Let me know!


A Brief Look at Browser-Side JavaScript Cross-Origin Solutions


For security reasons, browsers do a lot of work behind the scenes, and that is where the whole family of cross-origin problems comes from. Keep the following in mind:

Cross-origin restrictions do not stop the browser from sending a cross-site request; the request goes out normally, but the response is blocked by the browser. The best illustration is how CSRF works: the request reaches the backend server whether or not it is cross-origin! Note: some browsers, such as Chrome and Firefox, do not allow a page served over HTTPS to access HTTP resources across origins; those browsers block the request before it is even sent, which is a special case.

1. JSONP

JSONP stands for "JSON With Padding". It is nothing new: it is simply a way of using JSON, a usage pattern that can solve some common cross-origin problems in web pages.

As the name suggests, it is JSON wrapped inside a function call, like this:

callback({"Name": "小明", "Id" : 1823, "Rank": 7})

Because of how jQuery exposes it, JSONP is often confused with Ajax. In reality the two have nothing to do with each other.

The browser's same-origin policy is what creates the "cross-origin" problem in web pages, yet src attributes (on img, script, and so on) are not subject to that restriction.

The trick behind JSONP starts with the script tag: a script can invoke a JS function defined elsewhere, even on another origin, for example:

a.html
...
<script>
  function callback(data) {
    console.log(data.url)
  }
</script>

<script src='b.js'></script>
...


b.js
callback({url: 'http://www.rccoder.net'})

Obviously the code above runs and logs http://www.rccoder.net to the console.

Now suppose the content of b.js is not fixed but generated on the fly based on some input: that is the core idea of JSONP. Callback + data is the "JSON With Padding": the callback is the function the page expects to have called, and the data is what gets passed into it.

How is that data produced? Bluntly, by string concatenation on the server.

To summarize: Ajax requests data through XMLHttpRequest, which cannot request data from another origin, but referencing a JS file from another origin on a page is perfectly fine. So we asynchronously load a script whose content is generated dynamically (string concatenation in a backend language) and contains JSON With Padding (callback + data). The function we defined earlier runs as soon as this dynamically generated script loads, and so it receives the data it wanted.

To repeat: on a page, a.html is written like this to fetch the information of the user with UserId 1823:

a.html

...
<script src="http://server2.example.com/RetrieveUser?UserId=1823&callback=parseResponse"></script>
...

Requesting this URL returns executable JavaScript, for example:

parseResponse({"Name": "小明", "Id" : 1823, "Rank": 7})

Now the parseResponse() function in a.html runs and receives the data.

Hold on: what exactly does jQuery do?

jQuery makes the JSONP API look exactly like the Ajax one:

$.ajax({
  dataType: 'jsonp',
  url: 'http://server2.example.com/RetrieveUser?UserId=1823',
  success: function(data) {
    console.log(data)
  }
})

It can do this because jQuery does the work behind the scenes: at call time it generates a temporary function and substitutes it into callback=..., then destroys that function once the data has been received, acting as a temporary proxy that hands you the data.

Closing remarks on JSONP

JSONP is unaffected by the same-origin policy and enjoys excellent browser compatibility, but it only supports GET requests, only covers this particular HTTP use case, and is powerless when two pages on different domains need to call each other directly.

2. CORS

XMLHttpRequest's same-origin policy can feel draconian: even products from the same company rarely live on a single origin. Fortunately the web's designers anticipated this, and a server can declare which cross-origin access it allows.

CORS stands for Cross-Origin Resource Sharing. It works through custom HTTP headers that let the server and the browser negotiate, chiefly the Access-Control-Allow-Origin response header; with it set, XMLHttpRequest can make cross-origin requests.

Note that a normal XMLHttpRequest involves a single request, while a cross-origin request may involve two: a preflight request is sent first.
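As an illustration (not from the original article), here is a minimal sketch of the server side using Node with Express; any backend that can set response headers works the same way:

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // allow pages served from http://example.com to read responses from this API
  res.setHeader('Access-Control-Allow-Origin', 'http://example.com');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  next();
});

app.get('/user', (req, res) => res.json({ name: '小明' }));
app.listen(3000);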

More details can be found here:

https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS

Closing remarks on CORS

By contrast, CORS supports every type of HTTP request, but older browsers often do not support it.

Desktop:

Browser                          Version
Chrome                           4
Firefox (Gecko)                  3.5
Internet Explorer                8 (via XDomainRequest); full support in 10
Opera                            12
Safari                           4

Mobile:

Device                           Version
Android                          2.1
Chrome for Android               yes
Firefox Mobile (Gecko)           yes
IE Mobile                        ?
Opera Mobile                     12
Safari Mobile                    3.2

3. window.name

window.name persists for the lifetime of a window (tab), which makes it usable for passing data around.

Combined with an iframe it becomes even more capable:

Three files are needed: a / proxy / b

a.html

<script type="text/javascript">
    var state = 0,
    iframe = document.createElement('iframe'),
    loadfn = function() {
        if (state === 1) {
            var data = iframe.contentWindow.name;    // read the data
            alert(data);    // alerts 'I was there!'
        } else if (state === 0) {
            state = 1;
            iframe.contentWindow.location = "http://a.com/proxy.html";    // the proxy page, same origin as a.html
        }
    };
    iframe.src = 'http://b.com/b.html';
    if (iframe.attachEvent) {
        iframe.attachEvent('onload', loadfn);
    } else {
        iframe.onload  = loadfn;
    }
    document.body.appendChild(iframe);
</script>
b.html

<script type="text/javascript">
    window.name = 'I was there!';    // the data to pass; usually up to about 2 MB, and around 32 MB in IE and Firefox
                                     // the format is up to you, e.g. JSON or a plain string
</script>

proxy is a proxy page; it can be empty, but it must be served from the same domain as a.

4. document.domain

When different subdomains interact through an iframe, obtaining the other iframe's window object works fine, but most of that window's methods and properties cannot be used.

document.domain can help with this.

example.com

<iframe id='i' src="1.example.com" onload="do()"></iframe>
<script>
  document.domain = 'example.com';
  document.getElementById("i").contentWindow;
</script>
1.example.com

<script>
  document.domain = 'example.com';
</script>

That solves the problem. Note that document.domain can only be set to the page's own domain or a higher-level parent of it.

Closing remarks on document.domain

This approach is extremely convenient, but if one of the sites is compromised, the other is likely to inherit the security hole.

5. location.hash

This approach exposes data changes through the URL hash. Because Chrome and IE do not allow an iframe to modify parent.location.hash directly, one more layer is required.

a.html and b.html exchange data.

a.html

function startRequest(){
    var ifr = document.createElement('iframe');
    ifr.style.display = 'none';
    ifr.src = 'http://2.com/b.html#paramdo';
    document.body.appendChild(ifr);
}

function checkHash() {
    try {
        var data = location.hash ? location.hash.substring(1) : '';
        if (console.log) {
            console.log('Now the data is '+data);
        }
    } catch(e) {};
}
setInterval(checkHash, 2000);
b.html

// simulate some simple parameter handling
switch(location.hash){
    case '#paramdo':
        callBack();
        break;
    case '#paramset':
        //do something……
        break;
}

function callBack(){
    try {
        parent.location.hash = 'somedata';
    } catch (e) {
        // IE and Chrome's security model prevents modifying parent.location.hash,
        // so we go through a proxy iframe on an intermediate domain
        var ifrproxy = document.createElement('iframe');
        ifrproxy.style.display = 'none';
        ifrproxy.src = 'http://3.com/c.html#somedata';    // note: c.html must be served from the same domain as a.html
        document.body.appendChild(ifrproxy);
    }
}
c.html

// parent.parent belongs to the same origin as this page, so its location.hash can be changed
parent.parent.location.hash = self.location.hash.substring(1);

In this way the intermediate layer c lets a and b talk to each other through the hash.

6. window.postMessage()

This method is a new HTML5 feature for sending messages to any other window object. Note that the MessageEvent should only be sent once all scripts have finished executing; invoking it in the middle of a function can make subsequent functions time out.
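A minimal sketch (origins and the iframe id are made up for illustration):

// Page on http://a.com embedding an iframe that points to http://b.com
var frame = document.getElementById('other'); // 'other' is a made-up iframe id
frame.contentWindow.postMessage({ msg: 'hello from a.com' }, 'http://b.com');

// Script inside the iframe on http://b.com
window.addEventListener('message', function (event) {
  if (event.origin !== 'http://a.com') return; // always verify the sender's origin
  console.log(event.data.msg);
});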

https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage


The First Batch of Gold Miner Translation Project Articles Is Complete, with Mobile and Front-End Goodies


Before the New Year, Juejin started the Gold Miner translation project on GitHub ( http://github.com/xitu/gold-miner ), selecting thirty-odd high-quality English technical articles for the community to translate. Below is the list of translated articles. Meanwhile, the Gitcat prizes we ordered from the GitHub shop have reached the participating translators; thanks to them for their excellent translations, and everyone is welcome to join the project and contribute more high-quality translations.

Android

iOS

JavaScript

React

Tutorials

Gitcat showcase:


ReactiveCocoa in Plain Language: Common Uses on iOS


Introduction

Today's subject is ReactiveCocoa and its most common uses: KVO, target-action, delegates, and notifications.

ReactiveCocoa is a heavyweight framework, and a very powerful one. Why is it so powerful?
The event mechanisms we know from iOS development include:

  • Target-action
  • Delegates
  • KVO
  • Notifications
  • Timers
  • Asynchronous network callbacks

ReactiveCocoa takes over all of these iOS events with signals. That means every kind of event is handled in one uniform way, replacing the scattered mechanisms above. Naturally, such a large framework takes effort to learn, and if you are used to plain iOS programming it may feel unfamiliar at first!

First, look at this diagram:

ReactiveCocoa特征.png

As the diagram shows, ReactiveCocoa takes over all iOS events through signals and hands them to the developer, who reacts to each event with one of three responses.

The idea can be summarized with one more picture:

next completed error.png

Characteristics of RAC

  • Block-based functional + chained programming keeps all the related code together
    (functional && chainable)
  • Watch out for retain cycles; use the @weakify(self) / @strongify(self) pair to break them

The five common iOS event types below illustrate everyday ReactiveCocoa usage.

Install the framework:

  • Create a new iOS project
  • In a terminal, create a Podfile with the following contents
    # Uncomment this line to define a global platform for your project
    platform :ios, '8.0'
    # Uncomment this line if you're using Swift
    use_frameworks!
    pod 'ReactiveCocoa', '~> 4.0.4-alpha-4'
  • Version notes:
     2.5 is pure Objective-C
     3.0 is the release that supports Swift 1.2
     4.0 is a beta that supports Swift 2.0

Run the following command in the terminal to install the framework
$ pod install

KVO observation

What the demo does: observe changes to Person's name property; change name in touchesBegan and reflect the change in a UILabel, demonstrating KVO-style observation.

  • Note that a RAC signal, once subscribed to, is not released automatically
  • Any block that references self. creates a strong reference; use the @weakify(self) / @strongify(self) pair to break it
#import <Foundation/Foundation.h>

@interface Person : NSObject
@property (nonatomic, strong) NSString *name;
@end

// ViewController.m
#import "ViewController.h"
@import ReactiveCocoa;
#import "Person.h"

@interface ViewController ()

@property (weak, nonatomic) IBOutlet UILabel *nameLabel;
@property (nonatomic, strong) Person *person;

@end

@implementation ViewController

- (Person *)person
{
    if (!_person) {
        _person = [[Person alloc] init];
    }
    return _person;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    [self demoKvo];
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    self.person.name = [NSString stringWithFormat:@"zhang %d", arc4random_uniform(100)];
}

/**
 * 1. To exercise this method we add a Person class and a label; tapping the screen changes the label's text
 */
#pragma mark - KVO observation
- (void)demoKvo {

    @weakify(self)
    [RACObserve(self.person, name)
        subscribeNext:^(id x) {
            @strongify(self)
            self.nameLabel.text = x;
        }];
}
@end

Observing text field input

#pragma mark - Observing text field input
/**
 * 2. To exercise this method we add a nameText field; observe its input and assign it to self.person.name
 */
- (void)demoTextField {

    @weakify(self);
    [[self.nameText rac_textSignal]
     subscribeNext:^(id x) {
         @strongify(self);
         NSLog(@"%@",x);
         self.person.name = x;
     }];
}

Combining text field signals

#pragma mark - Combining text signals
/**
 * 3. To exercise this method we add a passWordText field and a button; watch nameText and passWordText
 * and toggle the button's enabled state accordingly
 */
- (void)textFileCombination {

    id signals = @[[self.nameText rac_textSignal],[self.passWordText rac_textSignal]];

    @weakify(self);
    [[RACSignal
      combineLatest:signals]
      subscribeNext:^(RACTuple *x) {

          @strongify(self);
          NSString *name = [x first];
          NSString *password = [x second];

          if (name.length > 0 && password.length > 0) {

              self.loginButton.enabled = YES;
              self.person.name = name;
              self.person.password = password;

          } else  {
              self.loginButton.enabled = NO;

          }
      }];

}

Observing a button

#pragma mark - Button events
/**
 * 4. To exercise this method: once loginButton is enabled, tapping it logs the person's properties, demonstrating the observation
 */
- (void)buttonDemo {
    @weakify(self);
    [[self.loginButton rac_signalForControlEvents:UIControlEventTouchUpInside]
       subscribeNext:^(id x) {
           @strongify(self);
           NSLog(@"person.name:  %@    person.password:  %@",self.person.name,self.person.password);
       }
     ];
}

Delegate methods

#pragma mark - Delegate methods
/**
 * 5. To exercise this method: pressing return while typing in nameText makes passWordText the first responder (i.e. moves the input cursor to passWordText)
 */
- (void)delegateDemo {

    @weakify(self)
    // 1. Create the delegate proxy
    self.proxy = [[RACDelegateProxy alloc] initWithProtocol:@protocol(UITextFieldDelegate)];
    // 2. Have the proxy register for the text field's delegate selector
    [[self.proxy rac_signalForSelector:@selector(textFieldShouldReturn:)]
     subscribeNext:^(id x) {
         @strongify(self)
         if (self.nameText.hasText) {
             [self.passWordText becomeFirstResponder];
         }
     }];
    self.nameText.delegate = (id<UITextFieldDelegate>)self.proxy;
}

Notifications

#pragma mark - Notifications
/**
 * To exercise this method: tapping a text field makes the system keyboard post a notification; print its contents
 */
- (void)notificationDemo {
    [[[NSNotificationCenter defaultCenter] rac_addObserverForName:UIKeyboardWillChangeFrameNotification object:nil]
        subscribeNext:^(id x) {
            NSLog(@"notificationDemo : %@", x);
        }
     ];
}

Now take another look at this diagram; doesn't it make much more sense?

ReactiveCocoa特征.png

RAC is that powerful; why not give it a try?
The source for the demos above is on GitHub: Demo link





Understanding React and Redux in Depth


React+Redux is extremely lean, and used well it is enormously productive. The biggest challenge comes from the functional programming (FP) paradigm. During productization, the top-level architecture design is a huge challenge; get it wrong and the result can be a tangled mess. At bottom, traditional frameworks versus React+Redux is a showdown between the OO and FP paradigms.

Simply learning a technology does not give you a global understanding, nor does it make it production-ready. So we must look at several aspects:

  1. Understand what is unique about it, e.g. a React component is a pure render function.
  2. Put it in context: only when React is placed inside Flux or Redux can you really see the one-way data flow.
  3. Compare to see the advantages over other solutions (Vue, Angular, Adobe Flex).
  4. Challenges: there is no silver bullet in software; every benefit comes with a cost.

1. On React

1.1 What Makes It Unique

  • Virtual DOM: another virtual, intermediate, cache layer. It boldly steps outside the constraints of the web DOM and implements a more ideal UI programming model: a tree of components, unified events, and a unified data channel (props). That model can then be rendered onto the web or onto native GUI systems, which is wonderful: learn once, use everywhere.
  • Lightweight components: React components emphasize pure render and delegate most user-interaction behavior to actions in other subsystems.
  • Component state should be minimal. Per the official guidance, state should not include anything passed in via props, anything that does not change over time, or anything that can be computed from other props and state.
  • A common design pattern is to create many stateless components that only render data, with a stateful component above them passing its state down via props. The stateful component encapsulates all interaction logic, while the stateless components render data declaratively.
  • The design paradigm is data driving the render of components. Compare this with the OO paradigm, where components sometimes become closed, complex classes hiding so much internal state that they are hard to trace.

1.2 React in Context

React's virtual DOM aims to build an ideal UI model. What does ideal UI programming look like? In my summary, four things:

  1. Basic UI construction: whether by drag-and-drop (VB) or by writing a tree-shaped XML-like description, quickly build a UI that matches intuition.
  2. Data binding: bind the UI to the data layer (model) and show a live UI with real data.
  3. User interaction: the user clicks or touches, the program performs the business logic and gives feedback.
  4. UI layout and navigation: on the front end, one UI brings up another and then goes away or hides itself.

Clearly React handles the first point well: writing JSX quickly gets you a basic UI. Data binding is also mostly covered, with performance guaranteed by DOM diffing. JSX is an excellent engineering device; your first reaction to it may be distaste, but give a new thing five minutes. On reflection, keeping computation and presentation together in one place is more cohesive and more pragmatic; you can even write CSS as inline styles inside the JS file.

For data binding, React's parent-to-child props protocol handles initial binding beautifully. props is simple and efficient; a highlight.

For user interaction, a React component sees itself as a finite state machine. Interacting with the user changes its state, and the render function computes the appropriate output from that state to present to the user.

1.3 Compared with Other Solutions

Compared with the alternatives, React stands out for being orthogonal, lightweight, convention-driven, and pragmatic:

  1. More orthogonal than Vue or Angular: there is no template; a big component is the template.
  2. Lighter than Adobe Flex or other UI systems: there is no elaborate UI OO hierarchy, which also shows that React is just a glue layer over the target UI system.
  3. Convention over Java-style ceremony, the trend since Rails; e.g. pure render and minimal stateless state.
  4. Pragmatic: the component tree is expressed with JSX, and data is passed with this.props.

1.4 Challenges

React is a data-driven UI component system with open data dependencies rather than self-contained OO objects. That brings challenges:

  1. The render method can get large, because a component's entire display logic lives in render, usually covering both the initial data binding and the behavior after user interaction.
  2. render produces a ReactElement tree, but some components and UI elements in that tree have been hoisted into earlier computation, so the tree can look incomplete and therefore less intuitive.
  3. Although you can decompose into small components, building big components can be hard: all logic depends on external data, which is relatively unstable, and the component loses the protection of its own boundary; it is not self-contained.
  4. As interaction grows complex, component state grows, and the global render logic becomes harder and harder to write.
  5. Propagating a child's behavior upward is not explicit; you usually have to pass a parent function down to the child as a callback.
  6. Large components often contain several pages, and how those pages exchange data is a big challenge.

2. On Redux

2.1 What Makes It Unique

  1. A data-layer framework, similar to Baobab. Its advantage is the middleware system, with ready-made plugins such as redux-thunk and redux-promise, so you need not worry about async requests. Flexibility and independence matter, but a sound overall design is also what lets you use it with confidence, without fearing missing features or other risks.
  2. It applies FP's data immutability, which greatly improves the ability to trace how data changes over time — the much-advertised time travel — and that helps when diagnosing complex problems.
  3. Data is stored as a tree; a reducer's return value is the creation, update, or deletion, and the tree shape does not need to be defined up front. Reducers are pure functions, echoing React's pure render.
  4. Strong constraints (conventions) increase cohesion. Flux's action, dispatcher, and store are scattered; layering is needed, but cohesion suffers and Java-style ceremony appears. Redux's data layer is clear: one store; updates are dispatched as actions; the first half is up to you (middleware), the second half is the reducer. The reducer's contract is: do not modify oldState, return newState, stay immutable.
  5. Actions are different: Redux actions are sliced very thin; one traditional action becomes three: Loading, GetSuccess, GetError. In that sense actions serve the UI rather than business-logic units (see the sketch after this list).
  6. Redux leans heavily on FP; you constantly run into curry, thunk, and promise. Implementing middleware is unfriendly to people without FP experience, so the learning cost is high.
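A minimal sketch of that action slicing and a pure reducer (the names are made up, not from the article):

var LOADING = 'user/LOADING';
var GET_SUCCESS = 'user/GET_SUCCESS';
var GET_ERROR = 'user/GET_ERROR';

function userReducer(state, action) {
  state = state || { loading: false, data: null, error: null };
  switch (action.type) {
    case LOADING:
      return Object.assign({}, state, { loading: true });
    case GET_SUCCESS:
      return { loading: false, data: action.payload, error: null };
    case GET_ERROR:
      return { loading: false, data: null, error: action.payload };
    default:
      return state; // never mutate the old state; always return it or a new object
  }
}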

2.2 Redux in Context

  1. Redux is a rather thin data layer that also handles keeping the view in sync (react-redux).
  2. In traditional MVC there is still a controller for business logic, but Redux splits the controller into two parts: action and reducer.
  3. In theory Redux could also take over the state that React components keep themselves, such as a user's search term, so that the filtering logic could move into a selector. But the benefit is not that large.

2.3 Compared with Other Solutions

  1. Compared with Baobab: both are data-management frameworks. Baobab provides cursors for conveniently updating deeply nested data; Redux does it with selector functions, which is more obscure. On the other hand, Redux's middleware makes data fetching easier than Baobab does.
  2. Compared with Rails' controllers and ActiveRecord, Redux is more a set of conventions: it provides neither route-level controllers nor data-access cursors.
  3. The API has fewer than ten functions and very little code, yet it is completely unlike earlier MVC frameworks. Perhaps the biggest issue is that it is not wired up with react-router, which is disorienting when building real projects.

2.4 Challenges

  1. The biggest challenges with Redux are at the design level: how to design actions and the shape of the state tree. We have very few guidelines (FP architectural thinking) to go on, which is a big challenge for a team without FP experience.
  2. Reading data out of the state tree through selector functions is obscure, and the code in a selector is business logic; keeping it only in selectors is not cohesive from a business point of view.
  3. Middleware design: an action is an intent sent to middleware for execution, but this makes actions ambiguous — sometimes an object, sometimes a function — and the FP style is quite invasive.
  4. It is not designed together with routing, which is unsettling; it is unclear how to connect data and components under different routes.

3. Summary

React + Redux is a typical implementation of the FP paradigm, while most other frameworks are OO. Whether to build with React+Redux depends on whether the team has a grasp of FP or solid architectural skills. Using React alone, as just a view layer, has no such requirement.

3.1 FP vs OO

FP优缺点
  1. FP的好处是没有OO的复杂仪式感,是沿着数据结构+算法的思路进行抽象和结构化。如果顶层设计做好,代码复用度极高,代码量少。比如要生成一棵树我用递归算法直接生成,而OO的人往往会用一个Composite模式去定义一个晦涩的类接口。
  2. FP的缺点也是面向过程编程的缺点,算法与数据全局化、并且互相耦合,这往往会导致一个强耦合的系统。如果没有做好顶层设计,是难以演进的。
  3. 通过约定和全局的理解,可以减少FP的一些缺点。“约定大于配置”也是框架的主要发展方向。
OO优缺点
  1. OO的好处是分而治之,增量演进。同时有自闭性,语义比较清晰。
  2. 缺点是在表达一些算法时,反而是很困难的。如command模式实现历史回滚就挺麻烦。这也是四人帮的设计模式大多比较难以理解的原因。另外,OO一直有一个算法复用的问题,ruby语言解决得比较好,用mixin很自然。而像C++就用多继承和泛型,个人感觉并不是最好的。

3.2 建议

  1. 有FP经验或者架构能力比较强、团队人员比较少且能力强的,比较适合用react+redux。不然用react+angular,或直接用vue。
  2. 过度的OO,搞太多java仪式感确实没有必要。通过架构设计,FP在生产力上有着一定的优势。同时对付复杂系统,能更好地调测、定位问题。在新时代下,值得尝试。

部署React+Redux Web App


March 09, 2016

前段时间使用React+Redux做了个后台管理的项目,在React初体验中分享了下入门经验。这篇文章谈谈我的部署实践。

目标

怎样才是好的部署呢?我觉得至少有以下2点:

  • 性能优化:包括代码执行速度、页面载入时间
  • 自动化:重复的事情尽量让机器完成,最好能运行一条命令就完成部署

代码层面

首先从代码层面来分析。

使用React+Redux,往往会用到其强大的调试工具Redux DevTools。在手动配置DevTools时需要围绕Store、Component进行一些配置,然而这些都是用来方便调试的,生产环境下我们不希望加入这些东西,所以建议就是从代码上隔离development和production环境:

containers/
    Root.js
    Root.dev.js
    Root.prod.js
    ...
store/
    index.js
    store.dev.js
    store.prod.js

同时采用单独的入口文件(比如上面的containers/Root.js)按需加载不同环境的代码:

if (process.env.NODE_ENV === 'production') {
    module.exports = require('./Root.prod');
} else {
    module.exports = require('./Root.dev');
}

有一个细节需要注意:ES6语法不支持在if中写import语句,所以这里采用了CommonJS的模块引入方法require

具体可以看看Redux的Real World示例项目。

代码层面还需要注意的一点就是按需import,否则可能会在打包时生成不必要的代码。
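
举个假设的例子(以lodash为例,并非本项目代码),按需引入子模块而不是整个库,可以避免把用不到的代码打进bundle:

// 不推荐:整个lodash都会被打包进来
// import _ from 'lodash';

// 推荐:只引入用到的函数
import debounce from 'lodash/debounce';

const onResize = debounce(() => console.log('resized'), 200);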

OK,我们现在用webpack打个包,webpack --config webpack.config.prod.js --progress,结果可能会让你吓一跳:8.4 M!求心理阴影面积…

使用webpack打包

接下来我们来调教下打包工具。目前React主流打包工具有2种:webpackBrowserify。Browserify没用过,这里主要谈谈webpack的配置经验。

同上,建议为不同的环境准备不同的webpack配置文件,比如:webpack.config.dev.jswebpack.config.prod.js。下面我们来看看几个比较关键的配置选项:

devtools

文档在这里,我对source map技术不太了解,所以几个选项真不知道是干什么的。不过好在下面的表格中有写哪些是production supported,随便选择一个就好,感觉结果区别不大。这里我选择了source-map,webpack一下后生成了2个包:

  • bundle.js:3.32 MB
  • bundle.js.map:3.78 MB

唔,这样好多了,把用于定位源码的source map分离出去了,一下子减少了一半以上的体积。(注:source map只会在浏览器devtools激活时加载,并不会影响正常的页面加载速度,具体可参考When is jQuery source map loaded?JavaScript Source Map 详解。)

plugins

webpack文档中有一节Optimization,讲到了一些优化技巧。Chunks略高级没用过,看前面两个吧。提到了3个插件:UglifyJsPlugin、OccurenceOrderPlugin、DedupePlugin,第一个插件应该都懂是干啥,后面两个描述得挺高深的,不过不懂没关系,全用上试试,反正没副作用:

plugins: [
    new webpack.optimize.UglifyJsPlugin({
        compress: {
            warnings: false
        }
    }),
    new webpack.optimize.DedupePlugin(),
    new webpack.optimize.OccurenceOrderPlugin()
]

打包结果:1.04 MB。

不要忽视NODE_ENV

NODE_ENV其实就是一个环境变量,在Node中可以通过process.env.NODE_ENV获取。目前大家往往用这个环境变量来标识当前到底是development还是production环境。

React提供了2个版本的代码(见:Development vs. Production Builds),production版性能更好:

We provide two versions of React: an uncompressed version for development and a minified version for production. The development version includes extra warnings about common mistakes, whereas the production version includes extra performance optimizations and strips all error messages.

同时在React文档中明确建议在生产环境下设置NODE_ENVproduction(见:npm):

Note: by default, React will be in development mode. To use React in production mode, set the environment variable NODE_ENV to production (using envify or webpack’s DefinePlugin). A minifier that performs dead-code elimination such as UglifyJS is recommended to completely remove the extra code present in development mode.

可以通过webpack的DefinePlugin设置环境变量,如下:

plugins: [
    ...
    new webpack.DefinePlugin({
        'process.env.NODE_ENV': JSON.stringify('production')
    }),
]

打包结果:844 KB。

虽然比之前的1 M减少得不多,不过可以提升React的运行性能,还是很值的。

OK,webpack到此为止,给出完整的webpack.config.prod.js

var path = require('path');
var webpack = require('webpack');

module.exports = {
    devtool: 'source-map',
    entry: [
        './index.js'
    ],
    output: {
        path: path.join(__dirname, 'webpack-output'),
        filename: 'bundle.js',
        publicPath: '/webpack-output/'
    },
    plugins: [
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            }
        }),
        new webpack.optimize.DedupePlugin(),
        new webpack.optimize.OccurenceOrderPlugin(),
        new webpack.DefinePlugin({
            'process.env.NODE_ENV': JSON.stringify('production')
        }),
    ],
    module: {
        loaders: [
            {
                test: /\.js$/,
                loader: 'babel',
                exclude: /node_modules/,
                include: __dirname
            },
            {
                test: /\.css$/,
                loaders: ["style", "css"]
            },
            {
                test: /\.scss$/,
                loaders: ["style", "css", "sass"]
            }
        ]
    },
};

打包结果输出到webpack-output文件夹下。

使用FIS3添加hash

前端公认的Best Practice就是给资源打上hash标签,这对缓存静态资源很有用。webpack文档中有一节Long-term Caching就是专门讲这个的,然而配置起来好麻烦的样子,最后我还是选择了百度的FIS3

使用方法见文档,写得很详细。贴一下我的fis-conf.js

// 需要打包的文件
fis.set('project.files', ['index.html', 'static/**', 'webpack-output/**']);

// 压缩CSS
fis.match('*.css', {
    optimizer: fis.plugin('clean-css')
});

// 压缩PNG图片
fis.match('*.png', {
    optimizer: fis.plugin('png-compressor')
});

fis.match('*.{js,css,png}', {
    useHash: true,  // 启用hash
    domain: 'http://7xrdyx.com1.z0.glb.clouddn.com',    // 添加CDN前缀
});

其中,通过useHash: true启用了hash功能,同时压缩了CSS、PNG图片,然后通过domain添加了CDN前缀。

运行fis3 release -d ./output后,就把所有的文件打包到output文件夹下了,截个图:

使用CDN

844 KB虽然比最开始的8.4 M缩小到了1/10,但其实也有点大。包大小基本上已经压缩到极限了,但我们还可以通过CDN来加快页面加载时间。

我选择的是七牛,效果不错,而且免费额度够用。

上一步中我们已经用FIS3添加了七牛CDN的前缀,接下来就是上传打包文件了。手动上传太麻烦,七牛提供了一个用来批量上传的命令行工具qrsync,具体用法见文档。

使用Fabric进行远程部署

部署的时候难免会涉及到登录server执行部署命令,你可以手动操作,但我还是推荐用一些工具来做,方便自动化。这类工具不少,选择顺手的就行,我因为之前有过Python开发经验,所以一直用Fabric,很好用。安装下Python,然后安装包管理工具pip,然后sudo pip install fabric就行了。

在项目根目录下创建fabfile.py,通过Python代码描述远程部署过程:

# coding: utf-8
from fabric.api import run, env, cd

def deploy():
    env.host_string = "username@ip"
    with cd('/path/to/your/project'):
        run('git pull')
        run('npm install')
        run('webpack --progress --config webpack.config.prod.js')
        run('fis3 release -d ./output')
        run('qrsync qrsync.conf.json')

其中,env.host_string描述server信息,然后cd到项目文件夹,git pull从Git仓库拉取源码,npm install安装第三方库,接下来就是各种打包,最后批量上传到CDN。

本地执行fab deploy,就可以部署到生产服务器了。

Nginx

收尾工作交给Nginx:

  • 域名与本地文件夹路径关联起来
  • gzip支持:这个一定要做,效果很赞,具体启用方法就是将/etc/nginx/nginx.conf与gzip相关的东西uncomment一下就行
  • 不存在的path一律导向/index.html:否则在非根路径下刷新浏览器,就会出现404,开发React的童鞋应该都懂这个坑…

我的nginx.conf如下所示:

server {
    listen 80;
    server_name yourdomain.com;
    root /path/to/your/project;

    location / {
        try_files $uri /index.html;
    }
}

注:有童鞋可能奇怪为什么没有添加cache的配置,因为所有东西都上传到CDN了…

浏览器实际加载效果

在Chrome调试工具下看。

禁止缓存:

可以看到bundle的最终大小为206 KB,加载时间是118 ms。

启用缓存:

效果还不错。

开发->部署流程

从开发到部署的流程如下:

  • 写代码、本地调试
  • 代码提交到远程Git仓库
  • 部署:fab deploy

附:使用npm scripts

最近npm scripts有点火,很多人都用它来取代Grunt、Gulp做自动化构建。

我们将部署命令放到package.json的scripts中,然后通过npm run <script-name>的方式调用不同的script,这样会更加clean:

{
    "name": "your-project-name",
    "version": "1.0.0",
    "description": "",
    "main": "index.js",
    "scripts": {
        "start": "node server.js",
        "build": "webpack --progress --config webpack.config.prod.js && fis3 release -d ./output",
        "upload": "qrsync qrsync.conf.json",
        "deploy": "fab deploy"
    },
    ...
}

然后fabfile.py可以改写为:

# coding: utf-8
from fabric.api import run, env, cd

def deploy():
    env.host_string = "user@ip"
    with cd('/path/to/your/project'):
        run('git pull')
        run('npm install')
        run('npm run build')
        run('npm run upload')

部署命令变成:npm run deploy,更加赏心悦目。


Using geo-based data with SequelizeJS utilizing PostgreSQL and MS SQL Server in Node.js


I’m currently building an Angular 2 sample application, which will use location-based data. The app uses the browser’s navigator.geolocation feature to obtain the current position and send it to a server which returns a list of chat messages in a given radius around the sent coordinate. As a German student, you may know this from the app Jodel. For sample purposes only, the backend of the app can either use PostgreSQL or Microsoft SQL Server (MSSQL) which will be abstracted with the amazing SequelizeJS library. The app and the backend will later be open-sourced, so you can take a look at it yourself.

I’m pretty sure all the information in this blog post can be found elsewhere (and even in more detail). But it took me quite an amount of time to get it up and running. So I want to give you a condensed overview about it.

The intention of this blog post is to show the creation of a simple backend with the two different database engines. The code shown in this post is also hosted at Github. There is no talk about the Angular 2 frontend in this article, though.

Preparation

While MS SQL Server has a built-in Geographic Information System (GIS), PostgreSQL does not. Fortunately, PostgreSQL has an extension called PostGIS to support spatial- and geo-based data. Since I’m using a Mac for development, installing PostGIS is very easy if you use Postgres.app. It has integrated PostGIS support. If you don’t use the app, you need to refer to the PostGIS documentation for proper installation. After installing PostGIS you need to enable the extension for the database where you want to use it by executing CREATE EXTENSION postgis; against the database. That’s all you need to do.

Schema design

Both PostgreSQL and MSSQL support two different data types for spatial and geo-based data: geometry and geography. Geometry data will be calculated on a planar plane. Geography data, however, will be calculated on a sphere, which is defined by a Spatial Reference System Identifier (SRID, more on that below). Take a look at the following two images to see the difference.

Planar Coordinate System

Spherical Coordinate System

As you can see, within the planar coordinate system a line would be drawn straight from New York to Berlin, resulting in less accurate calculation results. As we all know, the earth is not flat, so the spherical coordinate system takes that into account and calculates distances on a sphere, leading to more accurate results. Hopefully you don’t use a planar system to calculate the fuel for your airplane. ;-) In terms of pure performance, geometry-based data will be faster, since the calculations are easier.

Some paragraphs above I mentioned a mandatory SRID when doing calculation on a spherical coordinate system. It is used to uniquely identify projected, unprojected or local spatial coordinate system definitions. In easier words, it identifies how your coordinates are mapped to a sphere where they are valid (e.g. whole world, or just a specific country) and which units they produce in case of calculations (kilometers, miles, …). For example, EPSG:4326/WGS84 is used for the worldwide GPS satellite navigation system, while EPSG:4258/ETRS89 can be used for calculations in Europe. It is also possible to convert data from one SRID into another SRID.

Before you start doing your schema or table design, you should consider whether you want to use geometry or geography. As a very simple rule of thumb: If you don’t need to calculate distances across the globe or you have data which represents the earth, just go with geometry. Otherwise take geography into account.

SequelizeJS and GIS

GIS support for SequelizeJS is, on the one hand, supported since 2014’ish. On the other hand, unfortunately, it is only implemented for PostgreSQL and PostGIS. There is a discussion going on for implementing a broader support for GIS. Another drawback is that only geometry is currently supported. If you need geography support, then SequelizeJS today can’t help you since it is not implemented as a data type at all. Nevertheless, for my little sample it is completely OK to go with geometry data, even when doing location-based search since the radius will be small enough to get good results. Actually, we can use SequelizeJS for both PostgreSQL and MSSQL! The next paragraphs explain what you need to do to achieve this.

Prepare SequelizeJS

For the sample backend I’m using Node.js v5.4.0. First of all, we need to install the necessary dependencies. A simple npm i sequelize pg tedious is what we need. sequelize will install SequelizeJS, pg is the database driver for PostgreSQL, and tedious is the one for MSSQL.

Side note: There are official MSSQL drivers from Microsoft (here and here), but they are currently for Windows only.

Create the database connector class

Let’s start by creating a very simple and minimalistic class Database in ECMAScript 2015, which connects to the database and creates a model:

'use strict';

const Sequelize = require('sequelize');

function Database() {
    let sequelize;
    let dialect;
    let models = {};

    this.models = models;

    this.getDialect = function () {
        return dialect;
    };

    this.initialize = function (useMSSQL) {
        sequelize = useMSSQL ? connectToMSSQL() : connectToPostgreSQL();

        dialect = sequelize.connectionManager.dialectName;

        initializeModels();

        return syncDatabase();
    };

    function connectToMSSQL() {
        return new Sequelize('SampleDatabase', 'SampleUser', 'SamplePassword', {
            host: '10.211.55.3',
            dialect: 'mssql',
            dialectOptions: {
                instanceName: 'SQLEXPRESS2014'
            }
        });
    }

    function connectToPostgreSQL() {
        return new Sequelize('SampleDatabase', 'SampleUser', 'SamplePassword', {
            host: 'localhost',
            dialect: 'postgres'
        });
    }

    function initializeModels() {
        const SampleModel = sequelize.define('SampleModel', {
            id: {
                autoIncrement: true,
                type: Sequelize.INTEGER,
                primaryKey: true
            },
            point: {
                type: Sequelize.GEOMETRY('POINT'),
                allowNull: false
            }
        });

        models[SampleModel.name] = SampleModel;
    }

    function syncDatabase() {
        return sequelize.sync();
    }
}

module.exports = new Database();

Let’s dissect this code – first things first: import Sequelize, so we can use it. Then we define the Database class with a public field called models and two public functions called getDialect and initialize. The public field will hold our sample model, so we can use it later. The getDialect function returns the used dialect, either postgres or mssql. The initialize function is used to initialize and connect to the database. Within, we check whether we want to connect to PostgreSQL or MSSQL. After connecting, we create a SampleModel with an auto-incrementing primary key id and a point of type GEOMETRY('POINT'). SequelizeJS supports different kinds of geometries, but that depends on the underlying database engine. With GEOMETRY('POINT') we tell the database engine that we only want to store geometry of type point. Other valid kinds would be LINESTRING or POLYGON. Or you can omit the type completely to use different kinds within the same column. Then we store our model in our public field, so it is accessible via this.models.SampleModel later on. Last, but not least, we use syncDatabase(), which calls sequelize.sync() and returns a Promise. sequelize.sync() will create the necessary tables for your defined models in this case.

*Side note:* All SequelizeJS methods which communicate with the database will return a Promise.

The module gets exported as an instance/singleton.

Create the SampleService adapter

Next is a service class which will use our database and model to create entities and read data. The service will be a wrapper around the actual implementations for the different database engines and provides access methods which could be used by a user interface or Web API to access the data.

'use strict';

const SampleServiceMSSQL = require('./sampleService.mssql'),
        SampleServicePostgreSQL = require('./sampleService.postgres');

function SampleService(database) {
    const adapter = database.getDialect() === 'mssql'
            ? new SampleServiceMSSQL(database.models.SampleModel)
            : new SampleServicePostgreSQL(database.models.SampleModel);

    this.create = function (latitude, longitude) {
        // Do some input parameter validation

        const point = {
            type: 'Point',
            coordinates: [latitude, longitude]
        };

        return adapter.create(point);
    };

    this.getAround = function (latitude, longitude) {
        // Do some input parameter validation
        return adapter.getAround(latitude, longitude);
    };
}

module.exports = SampleService;

At first, we import two classes: SampleServiceMSSQL and SampleServicePostgreSQL, since we need different approaches for handling our geometry data. Then we define a SampleService which has a dependency on the database. Notice at the bottom that we export the class and not an instance. Remember that database.initialize() will return a Promise when everything is set up, so we will construct the service later, when the Promise has been resolved.

Within the class we check which underlying database engine we have. In case of MSSQL we construct SampleServiceMSSQL, otherwise SampleServicePostgreSQL. Both of them get the model as their first argument. Same reason here: that ensures a resolved database.initialize() Promise.

The class itself defines two methods. The first create()  will create a new entry in the database by the provided latitude  and longitude . To do so, a point  object is created with a property type  of value ‘Point’  and a property coordinates  containing an array with latitude  and longitude . This format is called GeoJSON and can be used throughout SequelizeJS. Then we call the adapter’s create  method.

Exactly the same is done with the second method getAround() . The purpose of this method will be to get all points in a radius around the given latitude  and longitude .

Please note that this sample intentionally lacks any input validation due to this blog post's scope.

Now we have a database and service class which functions as an adapter to the concrete implementations. Let’s build the implementations for PostgreSQL and MSSQL!

Implement the SampleServicePostgreSQL adapter class

We start by building the SampleServicePostgreSQL class:

'use strict';

function SampleServicePostgreSQL(model) {
    this.create = function (point) {
        return model.create({
            point: point
        });
    };

    this.getAround = function (latitude, longitude) {
        const query = `
SELECT
    "id", "createdAt", ST_Distance_Sphere(ST_MakePoint(:latitude, :longitude), "point") AS distance
FROM
    "SampleModels"
WHERE
    ST_Distance_Sphere(ST_MakePoint(:latitude, :longitude), "point") < :maxDistance
`;

        return model.sequelize.query(query, {
            replacements: {
                latitude: parseFloat(latitude),
                longitude: parseFloat(longitude),
                maxDistance: 10 * 1000
            },
            type: model.sequelize.QueryTypes.SELECT
        });
    };
}

module.exports = SampleServicePostgreSQL;

This is our adapter for PostgreSQL. The implementation of the create  method is really straightforward. Every SequelizeJS model contains a method create  which will insert the model data into the underlying database. Due to the support of PostGIS we can simply call model.create(point)  and let SequelizeJS take care of correctly inserting our data.

Let’s take a look at the getAround  method. As mentioned above, SequelizeJS has support for PostGIS. Unfortunately, it is a very basic support. It supports inserting, updating and reading, but no other methods like ST_Distance_Sphere , or ST_MakePoint  via a well-defined API abstraction. But according to this Github issue it is currently being discussed.  By the way, the mentioned methods are open standards from the Open Geospatial Consortium (OGC). We will see those methods later again, when implementing the MS SQL Server adapter.

Back to the getAround method. First we declare our parameterized query. We select the id, the createdAt and calculate a distance. OK, wait. What’s happening here? We don’t have a createdAt property in our model, do we? Well, we have, but not an explicit one. By default, SequelizeJS automatically creates additional createdAt and updatedAt properties for us and keeps track of them. SequelizeJS wouldn’t be SequelizeJS if you couldn’t change this behavior.
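
As a hypothetical sketch (not part of the sample project above), this automatic timestamp handling can be switched off per model via the timestamps option:

// Hypothetical sketch: a model without automatic createdAt/updatedAt columns
const BareModel = sequelize.define('BareModel', {
    name: Sequelize.STRING
}, {
    timestamps: false
});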

What about the ST_Distance_Sphere(ST_MakePoint(:latitude, :longitude), “point”) AS distance? We use ST_MakePoint to create a point from our latitude and longitude parameters. Then we use the result as the first parameter for ST_Distance_Sphere. The second parameter “point” references our table column. So for every row in our table SampleModels (SequelizeJS automatically pluralizes table names by default) we calculate the spherical distance (although it is a planar geometry object) between the given point and the one in our column. Be careful here and don’t get confused! ST_Distance_Sphere calculates the distance with a given earth mean radius of 6370986 meters. If you want to use a real spheroid according to the SRID mentioned above, you need to use ST_Distance_Spheroid.

The WHERE part of the query will be used to only select data which is within a provided radius, represented by the named parameter maxDistance. Last, but not least, we run this query against our PostgreSQL by calling model.sequelize.query. The first parameter is our query, the second is an options object. As you may have noticed, we used named placeholders in our query. Therefore, we use the replacements object to tell SequelizeJS the values for the placeholders. latitude and longitude are self-explanatory. maxDistance is set to 10 kilometers, so we only get points in the given radius. With the type property we set the type of the query to a SELECT statement.

So far, so good, our PostgreSQL adapter is done. Let’s move on to the MSSQL adapter!

Implement the SampleServiceMSSQL adapter class

The code for the SampleServiceMSSQL  class is the following:

'use strict';

function SampleServiceMSSQL(model) {
    this.create = function (point) {
        const query = `
INSERT INTO [SampleModels]
    (
        [point],
        [createdAt],
        [updatedAt]
    )
VALUES
    (
        geometry::Point(${point.coordinates[0]}, ${point.coordinates[1]}, 0),
        ?,
        ?
    )`;

        return model.sequelize.query(query, {
            replacements: [
                new Date().toISOString(),
                new Date().toISOString()
            ],
            model: model,
            type: model.sequelize.QueryTypes.INSERT
        });
    };

    this.getAround = function (latitude, longitude) {
        const maxDistance = 10 * 1000;
        const earthMeanRadius = 6370986 * Math.PI / 180;

        const query = `
SELECT
    [id], [createdAt], [point].STDistance(geometry::Point(?, ?, 0)) * ? AS distance
FROM
    [SampleModels]
WHERE
    [point].STDistance(geometry::Point(?, ?, 0)) * ? < ?
        `;

        return model.sequelize.query(query, {
            replacements: [
                latitude,
                longitude,
                earthMeanRadius,
                latitude,
                longitude,
                earthMeanRadius,
                maxDistance
            ],
            type: model.sequelize.QueryTypes.SELECT
        });
    };
}

module.exports = SampleServiceMSSQL;

Let’s go through this step by step. Due to the complete lack of geometry support for MSSQL in SequelizeJS, we need to do everything manually now. Take a look at the create method. We start with defining our INSERT query and insert the values: point, createdAt and updatedAt. If we execute a raw query we need to take care of setting the createdAt and updatedAt values. For the value of point we use geometry::Point(${point.coordinates[0]}, ${point.coordinates[1]}, 0). If you are not familiar with JavaScript’s template strings this may hurt your eyes a bit. The syntax ${expression} simply inserts the value into the string. geometry::Point() is MSSQL’s equivalent to the ST_MakePoint mentioned above, with one difference: it wants to have a third parameter, the SRID. Since we don’t use it here we can simply use 0.

You may have noticed that we don’t use named parameters here. SequelizeJS automatically recognizes everything that is prefixed with a colon, so it would try to replace :Point with a named parameter. Fortunately, the replacements object can be an array as well, replacing all the question marks with the values defined in the order of their appearance. Additionally we supply a property model with the value of our model. This tells SequelizeJS to automatically map the result of the INSERT statement to our model. Finally, we set the kind of the query to INSERT.

Now to our last method getAround. It is basically the same as the one from the PostgreSQL adapter, but since we don’t use an SRID for the calculation, MS SQL Server will calculate on a plane. That’s why we multiply the result with the earth mean radius to get the distance in meters. Note: this is slightly less accurate than the PostgreSQL version of the calculation with ST_Distance_Sphere.

Wow. Take a deep breath, we have finished the database and service classes. The last thing to do is a bit of orchestration to try everything out!

Orchestration

Create a new index.js  file with the following content:

'use strict';

const database = require('./database'),
    Service = require('./sampleService');

let service;

database.initialize(false)
    .then(() => {
        service = new Service(database);

        return service.create(49.019994, 8.413086);
    })
    .then(() => {
        return service.getAround(49.013626, 8.404480);
    })
    .then(result => {
        console.log(result);
    });

Absolutely straightforward. Import the database and the SampleService class. Then initialize the database with a PostgreSQL connection. After initialization, create a new Service with the database and insert a coordinate. Then call service.getAround() with another coordinate and print the result to the console. To run the sample app, open a terminal where your index.js is located and execute node index.js. You should now see the distance between the Schloss Karlsruhe and the Wildparkstadion, which looks like this:

Sample Output

SequelizeJS outputs the executed query by default (with the replaced values, which means you can easily execute the statement manually and take a look at its execution plan for optimization. How awesome!). If you don’t like it, change it. ;-)
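
For example, here is a hedged sketch of how the query logging could be silenced via the logging option when constructing the Sequelize instance (this is not part of the sample code above):

// Hypothetical sketch: turn off query logging entirely
const Sequelize = require('sequelize');
const sequelize = new Sequelize('SampleDatabase', 'SampleUser', 'SamplePassword', {
    host: 'localhost',
    dialect: 'postgres',
    logging: false
});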

At the bottom of the output, right after the SQL statement, is our result (PostgreSQL):

[
    {
        "id": 3,
        "createdAt": "Fri Jan 08 2016 09:03:07 GMT+0100 (CET)",
        "distance": 1185.92294455
    }
]

The same sample executed with MS SQL Server results in:

[
    {
        "id": 4,
        "createdAt": "Fri Jan 08 2016 09:11:44 GMT+0100 (CET)",
        "distance": 1190.4306593755073
    }
]

As you can see, there is a slight distance difference (approx. 5 meters) which could increase, if the distances get greater. Since the sample app will only make use of data in a 10 km radius, it is completely ok.

If you want to download this sample, head over to Github.


聊一聊前端自动化测试


前言

为何要测试

以前不喜欢写测试,主要是觉得编写和维护测试用例非常的浪费时间。在真正写了一段时间的基础组件和基础工具后,才发现自动化测试有很多好处。测试最重要的自然是提升代码质量。代码有测试用例,虽不能说百分百无bug,但至少说明测试用例覆盖到的场景是没有问题的。有测试用例,发布前跑一下,可以杜绝各种疏忽而引起的功能bug。

自动化测试另外一个重要特点就是快速反馈,反馈越迅速意味着开发效率越高。拿UI组件为例,开发过程都是打开浏览器刷新页面点点点才能确定UI组件工作情况是否符合自己预期。接入自动化测试以后,通过脚本代替这些手动点击,接入代码watch后每次保存文件都能快速得知自己的改动是否影响功能,节省了很多时间,毕竟机器干事情比人总是要快得多。

有了自动化测试,开发者会更加信任自己的代码。开发者再也不会惧怕将代码交给别人维护,不用担心别的开发者在代码里搞“破坏”。后人接手一段有测试用例的代码,修改起来也会更加从容。测试用例里非常清楚地阐释了开发者和使用者对于这段代码的期望和要求,也非常有利于代码的传承。

考虑投入产出比来做测试

说了这么多测试的好处,并不代表一上来就要写出100%场景覆盖的测试用例。个人一直坚持一个观点:基于投入产出比来做测试。由于维护测试用例也是一大笔开销(毕竟没有多少测试会专门帮前端写业务测试用例,而前端使用的流程自动化工具更是没有测试参与了)。对于像基础组件、基础模型之类的不常变更且复用较多的部分,可以考虑去写测试用例来保证质量。个人比较倾向于先写少量的测试用例覆盖到80%+的场景,保证覆盖主要使用流程。一些极端场景出现的bug可以在迭代中形成测试用例沉淀,场景覆盖也将逐渐趋近100%。但对于迭代较快的业务逻辑以及生存时间不长的活动页面之类的就别花时间写测试用例了,维护测试用例的时间大了去了,成本太高。

Node.js模块的测试

对于Node.js的模块,测试算是比较方便的,毕竟源码和依赖都在本地,看得见摸得着。

测试工具

测试主要使用到的工具是测试框架、断言库以及代码覆盖率工具:

  1. 测试框架:MochaJasmine等等,测试主要提供了清晰简明的语法来描述测试用例,以及对测试用例分组,测试框架会抓取到代码抛出的AssertionError,并增加一大堆附加信息,比如那个用例挂了,为什么挂等等。测试框架通常提供TDD(测试驱动开发)或BDD(行为驱动开发)的测试语法来编写测试用例,关于TDD和BDD的对比可以看一篇比较知名的文章The Difference Between TDD and BDD。不同的测试框架支持不同的测试语法,比如Mocha既支持TDD也支持BDD,而Jasmine只支持BDD。这里后续以Mocha的BDD语法为例
  2. 断言库:Should.jschaiexpect.js等等,断言库提供了很多语义化的方法来对值做各种各样的判断。当然也可以不用断言库,Node.js中也可以直接使用原生assert库。这里后续以Should.js为例
  3. 代码覆盖率:istanbul等等为代码在语法级分支上打点,运行了打点后的代码,根据运行结束后收集到的信息和打点时的信息来统计出当前测试用例对源码的覆盖情况。

一个煎蛋的栗子

以如下的Node.js项目结构为例

.
├── LICENSE
├── README.md
├── index.js
├── node_modules
├── package.json
└── test
    └── test.js

首先自然是安装工具,这里先装测试框架和断言库:npm install --save-dev mocha should。装完后就可以开始测试之旅了。

比如当前有一段js代码,放在index.js

'use strict';
module.exports = () => 'Hello Tmall';

那么对于这么一个函数,首先需要定一个测试用例,这里很明显,运行函数,得到字符串Hello Tmall就算测试通过。那么就可以按照Mocha的写法来写一个测试用例,因此新建一个测试代码在test/test.js:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('should get "Hello Tmall"', () => {
    mylib().should.be.eql('Hello Tmall');
  });
});

测试用例写完了,那么怎么知道测试结果呢?

由于我们之前已经安装了Mocha,可以在node_modules里面找到它,Mocha提供了命令行工具_mocha,可以直接在./node_modules/.bin/_mocha找到它,运行它就可以执行测试了:

Hello Tmall

这样就可以看到测试结果了。同样我们可以故意让测试不通过,修改test.js代码为:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('should get "Hello Taobao"', () => {
    mylib().should.be.eql('Hello Taobao');
  });
});

就可以看到下图了:

Taobao is different with Tmall

Mocha实际上支持很多参数来提供很多灵活的控制,比如使用./node_modules/.bin/_mocha --require should,Mocha在启动测试时就会自己去加载Should.js,这样test/test.js里就不需要手动require('should');了。更多参数配置可以查阅Mocha官方文档

那么这些测试代码分别是啥意思呢?

这里首先引入了断言库Should.js,然后引入了自己的代码,这里it()函数定义了一个测试用例,通过Should.js提供的api,可以非常语义化的描述测试用例。那么describe又是干什么的呢?

describe干的事情就是给测试用例分组。为了尽可能多的覆盖各种情况,测试用例往往会有很多。这时候通过分组就可以比较方便的管理(这里提一句,describe是可以嵌套的,也就是说外层分组了之后,内部还可以分子组)。另外还有一个非常重要的特性,就是每个分组都可以进行预处理(beforebeforeEach)和后处理(after, afterEach)。

如果把index.js源码改为:

'use strict';
module.exports = bu => `Hello ${bu}`;

为了测试不同的bu,测试用例也对应的改为:

'use strict';
require('should');
const mylib = require('../index');
let bu = 'none';

describe('My First Test', () => {
  describe('Welcome to Tmall', () => {
    before(() => bu = 'Tmall');
    after(() => bu = 'none');
    it('should get "Hello Tmall"', () => {
      mylib(bu).should.be.eql('Hello Tmall');
    });
  });
  describe('Welcome to Taobao', () => {
    before(() => bu = 'Taobao');
    after(() => bu = 'none');
    it('should get "Hello Taobao"', () => {
      mylib(bu).should.be.eql('Hello Taobao');
    });
  });
});

同样运行一下./node_modules/.bin/_mocha就可以看到如下图:

all bu welcomes you

这里before会在每个分组的所有测试用例运行前执行,相对的after则会在所有测试用例运行后执行;如果要以测试用例为粒度,可以使用beforeEach和afterEach,这两个钩子则会分别在该分组每个测试用例运行前和运行后执行。由于很多代码都需要模拟环境,可以在这些before和beforeEach里做准备工作,然后在after和afterEach里做回收操作。

异步代码的测试

回调

这里很显然代码都是同步的,但很多情况下我们的代码都是异步执行的,那么异步的代码要怎么测试呢?

比如这里index.js的代码变成了一段异步代码:

'use strict';
module.exports = (bu, callback) => process.nextTick(() => callback(`Hello ${bu}`));

由于源代码变成异步,所以测试用例就得做改造:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('Welcome to Tmall', done => {
    mylib('Tmall', rst => {
      rst.should.be.eql('Hello Tmall');
      done();
    });
  });
});

这里传入it的第二个参数的函数新增了一个done参数,当有这个参数时,这个测试用例会被认为是异步测试,只有在done()执行时,才认为测试结束。那如果done()一直没有执行呢?Mocha会触发自己的超时机制,超过一定时间(默认是2s,时长可以通过--timeout参数设置)就会自动终止测试,并以测试失败处理。

当然,beforebeforeEachafterafterEach这些钩子,同样支持异步,使用方式和it一样,在传入的函数第一个参数加上done,然后在执行完成后执行即可。

Promise

平常我们直接写回调会感觉自己很low,也容易出现回调金字塔,我们可以使用Promise来做异步控制,那么对于Promise控制下的异步代码,我们要怎么测试呢?

首先把源码做点改造,返回一个Promise对象:

'use strict';
module.exports = bu => new Promise(resolve => resolve(`Hello ${bu}`));

当然,如果是co党也可以直接使用co包裹:

'use strict';
const co = require('co');
module.exports = co.wrap(function* (bu) {
  return `Hello ${bu}`;
});

对应的修改测试用例如下:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('Welcome to Tmall', () => {
    return mylib('Tmall').should.be.fulfilledWith('Hello Tmall');
  });
});

Should.js在8.x.x版本自带了Promise支持,可以直接使用fullfilled()rejected()fullfilledWith()rejectedWith()等等一系列API测试Promise对象。

注意:使用should测试Promise对象时,请一定要return,一定要return,一定要return,否则断言将无效

异步运行测试

有时候,我们可能并不只是某个测试用例需要异步,而是整个测试过程都需要异步执行。比如测试Gulp插件的一个方案就是,首先运行Gulp任务,完成后测试生成的文件是否和预期的一致。那么如何异步执行整个测试过程呢?

其实Mocha提供了异步启动测试,只需要在启动Mocha的命令后加上--delay参数,Mocha就会以异步方式启动。这种情况下我们需要告诉Mocha什么时候开始跑测试用例,只需要执行run()方法即可。把刚才的test/test.js修改成下面这样:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Hello Tmall');
    });
  });
  run();
}, 1000);

直接执行./node_modules/.bin/_mocha就会发生下面这样的杯具:

no cases

那么加上--delay试试:

oh my green

熟悉的绿色又回来了!

代码覆盖率

单元测试玩得差不多了,可以开始试试代码覆盖率了。首先需要安装代码覆盖率工具istanbul:npm install --save-dev istanbul,istanbul同样有命令行工具,在./node_modules/.bin/istanbul可以寻觅到它的身影。Node.js端做代码覆盖率测试很简单,只需要用istanbul启动Mocha即可,比如上面那个测试用例,运行./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay,可以看到下图:

my first coverage

这就是代码覆盖率结果了,因为index.js中的代码比较简单,所以直接就100%了,那么修改一下源码,加个if吧:

'use strict';
module.exports = bu => new Promise(resolve => {
  if (bu === 'Tmall') return resolve(`Welcome to Tmall`);
  resolve(`Hello ${bu}`);
});

测试用例也跟着变一下:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Welcome to Tmall');
    });
  });
  run();
}, 1000);

换了姿势,我们再来一次./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay,可以得到下图:

coverage again

当使用istanbul运行Mocha时,istanbul命令自己的参数放在--之前,需要传递给Mocha的参数放在--之后

如预期所想,覆盖率不再是100%了,这时候我想看看哪些代码被运行了,哪些没有,怎么办呢?

运行完成后,项目下会多出一个coverage文件夹,这里就是放代码覆盖率结果的地方,它的结构大致如下:

.
├── coverage.json
├── lcov-report
│   ├── base.css
│   ├── index.html
│   ├── prettify.css
│   ├── prettify.js
│   ├── sort-arrow-sprite.png
│   ├── sorter.js
│   └── test
│       ├── index.html
│       └── index.js.html
└── lcov.info
  • coverage.json和lcov.info:测试结果描述的json文件,这个文件可以被一些工具读取,生成可视化的代码覆盖率结果,这个文件后面接入持续集成时还会提到。
  • lcov-report:通过上面两个文件由工具处理后生成的覆盖率结果页面,打开可以非常直观的看到代码的覆盖率

这里open coverage/lcov-report/index.html可以看到文件目录,点击对应的文件进入到文件详情,可以看到index.js的覆盖率如图所示:

coverage report

这里有四个指标,通过这些指标,可以量化代码覆盖情况:

  • statements:可执行语句执行情况
  • branches:分支执行情况,比如if就会产生两个分支,我们只运行了其中的一个
  • Functions:函数执行情况
  • Lines:行执行情况

下面代码部分,没有被执行过的代码会被标红,这些标红的代码往往是bug滋生的土壤,我们要尽可能消除这些红色。为此我们添加一个测试用例:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Welcome to Tmall');
    });
    it('Hello Taobao', () => {
      return mylib('Taobao').should.be.fulfilledWith('Hello Taobao');
    });
  });
  run();
}, 1000);

再来一次./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay,重新打开覆盖率页面,可以看到红色已经消失了,覆盖率100%。目标完成,可以睡个安稳觉了

集成到package.json

好了,一个简单的Node.js测试算是做完了,这些测试任务都可以集中写到package.jsonscripts字段中,比如:

{
  "scripts": {
    "test": "NODE_ENV=test ./node_modules/.bin/_mocha --require should",
    "cov": "NODE_ENV=test ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay"
  },
}

这样直接运行npm run test就可以跑单元测试,运行npm run cov就可以跑代码覆盖率测试了,方便快捷

对多个文件分别做测试

通常我们的项目都会有很多文件,比较推荐的方法是对每个文件单独去做测试。比如代码在./lib/下,那么./lib/文件夹下的每个文件都应该对应一个./test/文件夹下的文件名_spec.js的测试文件

为什么要这样呢?不能直接运行index.js入口文件做测试吗?

直接从入口文件来测其实是黑盒测试,我们并不知道代码内部运行情况,只是看某个特定的输入能否得到期望的输出。这通常可以覆盖到一些主要场景,但是在代码内部的一些边缘场景,就很难直接通过从入口输入特定的数据来解决了。比如代码里需要发送一个请求,入口只是传入一个url,url本身正确与否只是一个方面,当时的网络状况和服务器状况是无法预知的。传入相同的url,可能由于服务器挂了,也可能因为网络抖动,导致请求失败而抛出错误,如果这个错误没有得到处理,很可能导致故障。因此我们需要把黑盒打开,对其中的每个小块做白盒测试。

当然,并不是所有的模块测起来都这么轻松,前端用Node.js常干的事情就是写构建插件和自动化工具,典型的就是Gulp插件和命令行工具,那么这两种特定的场景要怎么测试呢?

Gulp插件的测试

现在前端构建使用最多的就是Gulp了,它简明的API、流式构建理念、以及在内存中操作的性能,让它备受追捧。虽然现在有像webpack这样的后起之秀,但Gulp依旧凭借着其繁荣的生态圈担当着前端构建的绝对主力。目前天猫前端就是使用Gulp作为代码构建工具。

用了Gulp作为构建工具,也就免不了要开发Gulp插件来满足业务定制化的构建需求,构建过程本质上其实是对源代码进行修改,如果修改过程中出现bug很可能直接导致线上故障。因此针对Gulp插件,尤其是会修改源代码的Gulp插件一定要做仔细的测试来保证质量。

又一个煎蛋的栗子

比如这里有个煎蛋的Gulp插件,功能就是往所有js代码前加一句注释// 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com,Gulp插件的代码大概就是这样:

'use strict';

const _ = require('lodash');
const through = require('through2');
const PluginError = require('gulp-util').PluginError;
const DEFAULT_CONFIG = {};

module.exports = config => {
  config = _.defaults(config || {}, DEFAULT_CONFIG);
  return through.obj((file, encoding, callback) => {
    if (file.isStream()) return callback(new PluginError('gulp-welcome-to-tmall', `Stream is not supported`));
    file.contents = new Buffer(`// 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com\n${file.contents.toString()}`);
    callback(null, file);
  });
};

对于这么一段代码,怎么做测试呢?

一种方式就是直接伪造一个文件传入,Gulp内部实际上是通过vinyl-fs从操作系统读取文件并做成虚拟文件对象,然后将这个虚拟文件对象交由through2创造的Transform来改写流中的内容,而外层任务之间通过orchestrator控制,保证执行顺序(如果不了解可以看看这篇翻译文章Gulp思维——Gulp高级技巧)。当然一个插件不需要关心Gulp的任务管理机制,只需要关心传入一个vinyl对象能否正确处理。因此只需要伪造一个虚拟文件对象传给我们的Gulp插件就可以了。

首先设计测试用例,考虑两个主要场景:

  1. 虚拟文件对象是流格式的,应该抛出错误
  2. 虚拟文件对象是Buffer格式的,能够正常对文件内容进行加工,加工完的文件加上// 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com的头

对于第一个测试用例,我们需要创建一个流格式的vinyl对象。而对于第二个测试用例,我们需要创建一个Buffer格式的vinyl对象。

当然,首先我们需要一个被加工的源文件,放到test/src/testfile.js下吧:

'use strict';
console.log('hello world');

这个源文件非常简单,接下来的任务就是把它分别封装成流格式的vinyl对象和Buffer格式的vinyl对象。

构建Buffer格式的虚拟文件对象

构建一个Buffer格式的虚拟文件对象可以用vinyl-fs读取操作系统里的文件生成vinyl对象,Gulp内部也是使用它,默认使用Buffer:

'use strict';
require('should');
const path = require('path');
const vfs = require('vinyl-fs');
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    vfs.src(path.join(__dirname, 'src', 'testfile.js'))
      .pipe(welcome())
      .on('data', function(vf) {
        vf.contents.toString().should.be.eql(`// 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com\n'use strict';\nconsole.log('hello world');\n`);
        done();
      });
  });
});

这样测了Buffer格式后算是完成了主要功能的测试,那么要如何测试流格式呢?

构建流格式的虚拟文件对象

方案一和上面一样直接使用vinyl-fs,增加一个参数buffer: false即可:

把代码修改成这样:

'use strict';
require('should');
const path = require('path');
const vfs = require('vinyl-fs');
const PluginError = require('gulp-util').PluginError;
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    // blabla
  });
  it('should throw PluginError when stream', done => {
    vfs.src(path.join(__dirname, 'src', 'testfile.js'), {
      buffer: false
    })
      .pipe(welcome())
      .on('error', e => {
        e.should.be.instanceOf(PluginError);
        done();
      });
  });
});

这样vinyl-fs直接从文件系统读取文件并生成流格式的vinyl对象。

如果内容并不来自于文件系统,而是来源于一个已经存在的可读流,要怎么把它封装成一个流格式的vinyl对象呢?

这样的需求可以借助vinyl-source-stream

'use strict';
require('should');
const fs = require('fs');
const path = require('path');
const source = require('vinyl-source-stream');
const vfs = require('vinyl-fs');
const PluginError = require('gulp-util').PluginError;
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    // blabla
  });
  it('should throw PluginError when stream', done => {
    fs.createReadStream(path.join(__dirname, 'src', 'testfile.js'))
      .pipe(source())
      .pipe(welcome())
      .on('error', e => {
        e.should.be.instanceOf(PluginError);
        done();
      });
  });
});

这里首先通过fs.createReadStream创建了一个可读流,然后通过vinyl-source-stream把这个可读流包装成流格式的vinyl对象,并交给我们的插件做处理

Gulp插件执行错误时请抛出PluginError,这样能够让gulp-plumber这样的插件进行错误管理,防止错误终止构建进程,这在gulp watch时非常有用
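
作为补充,这里给一个假设的用法示意(任务名、目录均为虚构,并非本文示例项目的代码),在watch场景下用gulp-plumber兜住插件抛出的错误:

'use strict';
const gulp = require('gulp');
const plumber = require('gulp-plumber');
const welcome = require('../index');

gulp.task('scripts', () => {
  return gulp.src('src/**/*.js')
    .pipe(plumber())   // 插件抛出PluginError时只报告错误,不终止构建进程
    .pipe(welcome())
    .pipe(gulp.dest('build'));
});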

模拟Gulp运行

我们伪造的对象已经可以跑通功能测试了,但是这数据来源终究是自己伪造的,并不是用户日常的使用方式。如果采用最接近用户使用的方式来做测试,测试结果才更加可靠和真实。那么问题来了,怎么模拟真实的Gulp环境来做Gulp插件的测试呢?

首先模拟一下我们的项目结构:

test
├── build
│   └── testfile.js
├── gulpfile.js
└── src
    └── testfile.js

一个简易的项目结构,源码放在src下,通过gulpfile来指定任务,构建结果放在build下。按照我们平常使用方式在test目录下搭好架子,并且写好gulpfile.js:

'use strict';
const gulp = require('gulp');
const welcome = require('../index');
const del = require('del');

gulp.task('clean', cb => del('build', cb));

gulp.task('default', ['clean'], () => {
  return gulp.src('src/**/*')
    .pipe(welcome())
    .pipe(gulp.dest('build'));
});

接着在测试代码里来模拟Gulp运行了,这里有两种方案:

  1. 使用child_process库提供的spawnexec开子进程直接跑gulp命令,然后测试build目录下是否是想要的结果
  2. 直接在当前进程获取gulpfile中的Gulp实例来运行Gulp任务,然后测试build目录下是否是想要的结果

开子进程进行测试有一些坑,istanbul测试代码覆盖率时是无法跨进程的,因此开子进程测试,首先需要子进程执行命令时加上istanbul,然后还需要手动去收集覆盖率数据,当开启多个子进程时还需要自己做覆盖率结果数据合并,相当麻烦。

那么不开子进程怎么做呢?可以借助run-gulp-task这个工具来运行,其内部的机制就是首先获取gulpfile文件内容,在文件尾部加上module.exports = gulp;后require gulpfile从而获取Gulp实例,然后将Gulp实例递交给run-sequence,调用内部未开放的API gulp.run来运行。

我们采用不开子进程的方式,把运行Gulp的过程放在before钩子中,测试代码变成下面这样:

'use strict';
require('should');
const path = require('path');
const run = require('run-gulp-task');
const CWD = process.cwd();
const fs = require('fs');

describe('welcome to Tmall', () => {
  before(done => {
    process.chdir(__dirname);
    run('default', path.join(__dirname, 'gulpfile.js'))
      .catch(e => e)
      .then(e => {
        process.chdir(CWD);
        done(e);
      });
  });
  it('should work', function() {
    fs.readFileSync(path.join(__dirname, 'build', 'testfile.js')).toString().should.be.eql(`// 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com\n'use strict';\nconsole.log('hello world');\n`);
  });
});

这样由于不需要开子进程,代码覆盖率测试也可以和普通Node.js模块一样了

测试命令行输出

双一个煎蛋的栗子

当然前端写工具并不只限于Gulp插件,偶尔还会写一些辅助命令啥的,这些辅助命令直接在终端上运行,结果也会直接展示在终端上。比如一个简单的使用commander实现的命令行工具:

// in index.js
'use strict';
const program = require('commander');
const path = require('path');
const pkg = require(path.join(__dirname, 'package.json'));

program.version(pkg.version)
  .usage('[options] <file>')
  .option('-t, --test', 'Run test')
  .action((file, prog) => {
    if (prog.test) console.log('test');
  });

module.exports = program;

// in bin/cli
#!/usr/bin/env node
'use strict';
const program = require('../index.js');

program.parse(process.argv);

!program.args[0] && program.help();

// in package.json
{
  "bin": {
    "cli-test": "./bin/cli"
  }
}

拦截输出

要测试命令行工具,自然要模拟用户输入命令,这一次依旧选择不开子进程,直接伪造一个process.argv交给program.parse即可。命令输入了,问题也来了:数据是直接console.log的,要怎么拦截呢?

这可以借助sinon来拦截console.log,而且sinon非常贴心的提供了mocha-sinon方便测试用,这样test.js大致就是这个样子:

'use strict';
require('should');
require('mocha-sinon');
const program = require('../index');
const uncolor = require('uncolor');

describe('cli-test', () => {
  let rst;
  beforeEach(function() {
    this.sinon.stub(console, 'log', function() {
      rst = arguments[0];
    });
  });
  it('should print "test"', () => {
    program.parse([
      'node',
      './bin/cli',
      '-t',
      'file.js'
    ]);
    return uncolor(rst).trim().should.be.eql('test');
  });
});

PS:由于命令行输出时经常会使用colors这样的库来添加颜色,因此在测试时记得用uncolor把这些颜色移除

小结

Node.js相关的单元测试就扯这么多了,还有很多场景像服务器测试什么的就不扯了,因为我不会。当然前端最主要的工作还是写页面,接下来扯一扯如何对页面上的组件做测试。

页面测试

对于浏览器里跑的前端代码,做测试要比Node.js模块麻烦得多。Node.js模块是纯js代码,使用V8运行在本地,测试用的各种各样的依赖和工具都能快速地安装;而前端代码不仅仅要测试js、CSS等等,更麻烦的是需要模拟各种各样的浏览器,比较常见的前端代码测试方案有下面几种:

  1. 构建一个测试页面,人肉直接到虚拟机上开各种浏览器跑测试页面(比如公司的f2etest)。这个方案的缺点就是不好做代码覆盖率测试,也不好持续化集成,同时人肉工作较多
  2. 使用PhantomJS构建一个伪造的浏览器环境跑单元测试,好处是解决了代码覆盖率问题,也可以做持续集成。这个方案的缺点是PhantomJS毕竟是Qt的webkit,并不是真实浏览器环境,PhantomJS也有各种各样兼容性坑
  3. 通过Karma调用本机各种浏览器进行测试,好处是可以跨浏览器做测试,也可以测试覆盖率,但持续集成时需要注意只能开PhantomJS做测试,毕竟集成的Linux环境不可能有浏览器。这可以说是目前看到的最好的前端代码测试方式了

这里以gulp为构建工具做测试,后面在React组件测试部分再介绍以webpack为构建工具做测试

叒一个煎蛋的栗子

前端代码依旧是js,一样可以用Mocha+Should.js来做单元测试。打开node_modules下的Mocha和Should.js,你会发现这些优秀的开源工具已经非常贴心的提供了可在浏览器中直接运行的版本:mocha/mocha.jsshould/should.min.js,只需要把他们通过script标签引入即可,另外Mocha还需要引入自己的样式mocha/mocha.css

首先看一下我们的前端项目结构:

.
├── gulpfile.js
├── package.json
├── src
│   └── index.js
└── test
    ├── test.html
    └── test.js

比如这里源码src/index.js就是定义一个全局函数:

window.render = function() {
  var ctn = document.createElement('div');
  ctn.setAttribute('id', 'tmall');
  ctn.appendChild(document.createTextNode('天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com'));
  document.body.appendChild(ctn);
}

而测试页面test/test.html大致上是这个样子:

<!DOCTYPE html>
<html>

<head>
  <meta charset="utf-8">
  <link rel="stylesheet" href="../node_modules/mocha/mocha.css"/>
  <script src="../node_modules/mocha/mocha.js"></script>
  <script src="../node_modules/should/should.js"></script>
</head>

<body>
  <div id="mocha"></div>
  <script src="../src/index.js"></script>
  <script src="test.js"></script>
</body>

</html>

head里引入了测试框架Mocha和断言库Should.js,测试的结果会被显示在id为mocha的div容器里,而test/test.js里则是我们的测试代码。

前端页面上测试和Node.js上测试没啥太大不同,只是需要指定Mocha使用的UI,并需要手动调用mocha.run()

mocha.ui('bdd');
describe('Welcome to Tmall', function() {
  before(function() {
    window.render();
  });
  it('Hello', function() {
    document.getElementById('tmall').textContent.should.be.eql('天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com');
  });
});
mocha.run();

在浏览器里打开test/test.html页面,就可以看到效果了:

test page

在不同的浏览器里打开这个页面,就可以看到当前浏览器的测试了。这种方式能兼容最多的浏览器,当然要跨机器之前记得把资源上传到一个测试机器都能访问到的地方,比如CDN。

测试页面有了,那么来试试接入PhantomJS吧

使用PhantomJS进行测试

PhantomJS是一个模拟的浏览器,它能执行js,甚至还有webkit渲染引擎,只是没有浏览器界面来展示渲染结果罢了。我们可以使用它做很多事情,比如对网页进行截图,写爬虫爬取异步渲染的页面,以及接下来要介绍的——对页面做测试。

当然,这里我们不是直接使用PhantomJS,而是使用mocha-phantomjs来做测试。npm install --save-dev mocha-phantomjs安装完成后,就可以运行命令./node_modules/.bin/mocha-phantomjs ./test/test.html来对上面那个test/test.html的测试了:

PhantomJS test

单元测试没问题了,接下来就是代码覆盖率测试

覆盖率打点

首先第一步,改写我们的gulpfile.js

'use strict';
const gulp = require('gulp');
const istanbul = require('gulp-istanbul');

gulp.task('test', function() {
  return gulp.src(['src/**/*.js'])
    .pipe(istanbul({
      coverageVariable: '__coverage__'
    }))
    .pipe(gulp.dest('build-test'));
});

这里把覆盖率结果保存到__coverage__里面,把打完点的代码放到build-test目录下,比如刚才的src/index.js的代码,在运行gulp test后,会生成build-test/index.js,内容大致是这个样子:

var __cov_WzFiasMcIh_mBvAjOuQiQg = (Function('return this'))();
if (!__cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__) { __cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__ = {}; }
__cov_WzFiasMcIh_mBvAjOuQiQg = __cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__;
if (!(__cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'])) {
   __cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'] = {"path":"/Users/lingyu/gitlab/dev/mui/test-page/src/index.js","s":{"1":0,"2":0,"3":0,"4":0,"5":0},"b":{},"f":{"1":0},"fnMap":{"1":{"name":"(anonymous_1)","line":1,"loc":{"start":{"line":1,"column":16},"end":{"line":1,"column":27}}}},"statementMap":{"1":{"start":{"line":1,"column":0},"end":{"line":6,"column":1}},"2":{"start":{"line":2,"column":2},"end":{"line":2,"column":42}},"3":{"start":{"line":3,"column":2},"end":{"line":3,"column":34}},"4":{"start":{"line":4,"column":2},"end":{"line":4,"column":85}},"5":{"start":{"line":5,"column":2},"end":{"line":5,"column":33}}},"branchMap":{}};
}
__cov_WzFiasMcIh_mBvAjOuQiQg = __cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'];
__cov_WzFiasMcIh_mBvAjOuQiQg.s['1']++;window.render=function(){__cov_WzFiasMcIh_mBvAjOuQiQg.f['1']++;__cov_WzFiasMcIh_mBvAjOuQiQg.s['2']++;var ctn=document.createElement('div');__cov_WzFiasMcIh_mBvAjOuQiQg.s['3']++;ctn.setAttribute('id','tmall');__cov_WzFiasMcIh_mBvAjOuQiQg.s['4']++;ctn.appendChild(document.createTextNode('天猫前端招人\uFF0C有意向的请发送简历至lingyucoder@gmail.com'));__cov_WzFiasMcIh_mBvAjOuQiQg.s['5']++;document.body.appendChild(ctn);};

这都什么鬼!不管了,反正运行它就好。把test/test.html里面引入的代码从src/index.js修改为build-test/index.js,保证页面运行时使用的是编译后的代码。

编写钩子

运行数据会存放到变量__coverage__里,但是我们还需要一段钩子代码在单元测试结束后获取这个变量里的内容。把钩子代码放在test/hook.js下,里面内容这样写:

'use strict';

var fs = require('fs');

module.exports = {
  afterEnd: function(runner) {
    var coverage = runner.page.evaluate(function() {
      return window.__coverage__;
    });
    if (coverage) {
      console.log('Writing coverage to coverage/coverage.json');
      fs.write('coverage/coverage.json', JSON.stringify(coverage), 'w');
    } else {
      console.log('No coverage data generated');
    }
  }
};

这样准备工作就大功告成了,执行命令./node_modules/.bin/mocha-phantomjs ./test/test.html --hooks ./test/hook.js,可以看到如下图结果,同时覆盖率结果被写入到coverage/coverage.json里面了。

coverage hook

生成页面

有了结果覆盖率结果就可以生成覆盖率页面了,首先看看覆盖率概况吧。执行命令./node_modules/.bin/istanbul report --root coverage text-summary,可以看到下图:

coverage summary

还是原来的配方,还是熟悉的味道。接下来运行./node_modules/.bin/istanbul report --root coverage lcov生成覆盖率页面,执行完后open coverage/lcov-report/index.html,点击进入到src/index.js:

coverage page

一颗赛艇!这样我们对前端代码就能做覆盖率测试了

接入Karma

Karma是一个测试集成框架,可以方便地以插件的形式集成测试框架、测试环境、覆盖率工具等等。Karma已经有了一套相当完善的插件体系,这里尝试在PhantomJS、Chrome、FireFox下做测试,首先需要使用npm安装一些依赖:

  1. karma:框架本体
  2. karma-mocha:Mocha测试框架
  3. karma-coverage:覆盖率测试
  4. karma-spec-reporter:测试结果输出
  5. karma-phantomjs-launcher:PhantomJS环境
  6. phantomjs-prebuilt: PhantomJS最新版本
  7. karma-chrome-launcher:Chrome环境
  8. karma-firefox-launcher:Firefox环境

安装完成后,就可以开启我们的Karma之旅了。还是之前的那个项目,我们把该清除的清除,只留下源文件和测试文件,并增加一个karma.conf.js文件:

.
├── karma.conf.js
├── package.json
├── src
│   └── index.js
└── test
    └── test.js

karma.conf.js是Karma框架的配置文件,在这个例子里,它大概是这个样子:

'use strict';

module.exports = function(config) {
  config.set({
    frameworks: ['mocha'],
    files: [
      './node_modules/should/should.js',
      'src/**/*.js',
      'test/**/*.js'
    ],
    preprocessors: {
      'src/**/*.js': ['coverage']
    },
    plugins: ['karma-mocha', 'karma-phantomjs-launcher', 'karma-chrome-launcher', 'karma-firefox-launcher', 'karma-coverage', 'karma-spec-reporter'],
    browsers: ['PhantomJS', 'Firefox', 'Chrome'],
    reporters: ['spec', 'coverage'],
    coverageReporter: {
      dir: 'coverage',
      reporters: [{
        type: 'json',
        subdir: '.',
        file: 'coverage.json',
      }, {
        type: 'lcov',
        subdir: '.'
      }, {
        type: 'text-summary'
      }]
    }
  });
};

这些配置都是什么意思呢?这里挨个说明一下:

  • frameworks: 使用的测试框架,这里依旧是我们熟悉又亲切的Mocha
  • files:测试页面需要加载的资源,上面的test目录下已经没有test.html了,所有需要加载内容都在这里指定,如果是CDN上的资源,直接写URL也可以,不过建议尽可能使用本地资源,这样测试更快而且即使没网也可以测试。这个例子里,第一行载入的是断言库Should.js,第二行是src下的所有代码,第三行载入测试代码
  • preprocessors:配置预处理器,在上面files载入对应的文件前,如果在这里配置了预处理器,会先对文件做处理,然后载入处理结果。这个例子里,需要对src目录下的所有资源添加覆盖率打点(这一步之前是通过gulp-istanbul来做,现在karma-coverage框架可以很方便的处理,也不需要钩子啥的了)。后面做React组件测试时也会在这里使用webpack
  • plugins:安装的插件列表
  • browsers:需要测试的浏览器,这里我们选择了PhantomJS、FireFox、Chrome
  • reporters:需要生成哪些代码报告
  • coverageReporter:覆盖率报告要如何生成,这里我们期望生成和之前一样的报告,包括覆盖率页面、lcov.info、coverage.json、以及命令行里的提示

好了,配置完成,来试试吧,运行./node_modules/karma/bin/karma start --single-run,可以看到如下输出:

run karma

可以看到,Karma首先会在9876端口开启一个本地服务,然后分别启动PhantomJS、FireFox、Chrome去加载这个页面,收集到测试结果信息之后分别输出,这样跨浏览器测试就解决啦。如果要新增浏览器就安装对应的浏览器插件,然后在browsers里指定一下即可,非常灵活方便。

那如果我的mac电脑上没有IE,又想测IE,怎么办呢?可以直接运行./node_modules/karma/bin/karma start启动本地服务器,然后使用其他机器开对应浏览器直接访问本机的9876端口(当然这个端口是可配置的)即可,同样移动端的测试也可以采用这个方法。这个方案兼顾了前两个方案的优点,弥补了其不足,是目前看到最优秀的前端代码测试方案了

React组件测试

去年React旋风一般席卷全球,当然天猫也在技术上紧跟时代脚步。天猫商家端业务已经全面切入React,形成了React组件体系,几乎所有新业务都采用React开发,而老业务也在不断向React迁移。React大红大紫,这里单独拉出来讲一讲React+webpack的打包方案如何进行测试

这里只聊React Web,不聊React Native

事实上天猫目前并未采用webpack打包,而是Gulp+Babel编译React CommonJS代码成AMD模块使用,这是为了能够在新老业务使用上更加灵活,当然也有部分业务采用webpack打包并上线

叕一个煎蛋的栗子

这里创建一个React组件,目录结构大致这样(这里略过CSS相关部分,只要跑通了,集成CSS像PostCSS、Less都没啥问题):

.
├── demo
├── karma.conf.js
├── package.json
├── src
│   └── index.jsx
├── test
│   └── index_spec.jsx
├── webpack.dev.js
└── webpack.pub.js

React组件源码src/index.jsx大概是这个样子:

import React from 'react';
class Welcome extends React.Component {
  constructor() {
    super();
  }
  render() {
    return <div>{this.props.content}</div>;
  }
}
Welcome.displayName = 'Welcome';
Welcome.propTypes = {
  /**
   * content of element
   */
  content: React.PropTypes.string
};
Welcome.defaultProps = {
  content: 'Hello Tmall'
};
module.exports = Welcome;

那么对应的test/index_spec.jsx则大概是这个样子:

import 'should';
import Welcome from '../src/index.jsx';
import ReactDOM from 'react-dom';
import React from 'react';
import TestUtils from 'react-addons-test-utils';
describe('test', function() {
  const container = document.createElement('div');
  document.body.appendChild(container);
  afterEach(() => {
    ReactDOM.unmountComponentAtNode(container);
  });
  it('Hello Tmall', function() {
    let cp = ReactDOM.render(<Welcome/>, container);
    let welcome = TestUtils.findRenderedComponentWithType(cp, Welcome);
    ReactDOM.findDOMNode(welcome).textContent.should.be.eql('Hello Tmall');
  });
});

由于是测试React,自然要使用React的TestUtils,这个工具库提供了不少方便查找节点和组件的方法,最重要的是它提供了模拟事件的API,这可以说是UI测试最重要的一个功能。更多关于TestUtils的使用请参考React官网,这里就不扯了…
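
不过为了有个直观印象,这里补一个最简单的假设示例(沿用上面的Welcome组件,点击后具体断言什么取决于组件自身的行为):

it('simulate click', function() {
  let cp = ReactDOM.render(<Welcome/>, container);
  let node = ReactDOM.findDOMNode(cp);
  // 用TestUtils.Simulate在真实DOM节点上模拟一次点击事件
  TestUtils.Simulate.click(node);
  // 之后即可针对点击引起的state或DOM变化做断言
});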

代码有了,测试用例也有了,接下来就差跑起来了。karma.conf.js肯定就和上面不一样了,首先它要多一个插件karma-webpack,因为我们的React组件是需要webpack打包的,不打包的代码压根就没法运行。另外还需要注意代码覆盖率测试也出现了变化。因为现在多了一层Babel编译,Babel编译ES6、ES7源码生成ES5代码后会产生很多polyfill代码,因此如果对build完成之后的代码做覆盖率测试会包含这些polyfill代码,这样测出来的覆盖率显然是不可靠的,这个问题可以通过isparta-loader来解决。React组件的karma.conf.js大概是这个样子:

'use strict';
const path = require('path');

module.exports = function(config) {
  config.set({
    frameworks: ['mocha'],
    files: [
      './node_modules/phantomjs-polyfill/bind-polyfill.js',
      'test/**/*_spec.jsx'
    ],
    plugins: ['karma-webpack', 'karma-mocha', 'karma-chrome-launcher', 'karma-firefox-launcher', 'karma-phantomjs-launcher', 'karma-coverage', 'karma-spec-reporter'],
    browsers: ['PhantomJS', 'Firefox', 'Chrome'],
    preprocessors: {
      'test/**/*_spec.jsx': ['webpack']
    },
    reporters: ['spec', 'coverage'],
    coverageReporter: {
      dir: 'coverage',
      reporters: [{
        type: 'json',
        subdir: '.',
        file: 'coverage.json',
      }, {
        type: 'lcov',
        subdir: '.'
      }, {
        type: 'text-summary'
      }]
    },
    webpack: {
      module: {
        loaders: [{
          test: /\.jsx?/,
          loaders: ['babel']
        }],
        preLoaders: [{
          test: /\.jsx?$/,
          include: [path.resolve('src/')],
          loader: 'isparta'
        }]
      }
    },
    webpackMiddleware: {
      noInfo: true
    }
  });
};

这里相对于之前的karma.conf.js,主要有以下几点区别:

  1. 由于webpack的打包功能,我们在测试代码里直接import组件代码,因此不再需要在files里手动引入组件代码
  2. 预处理里面需要对每个测试文件都做webpack打包
  3. 添加webpack编译相关配置,在编译源码时,需要定义preLoaders,并使用isparta-loader做代码覆盖率打点
  4. 添加webpackMiddleware配置,这里noInfo作用是不需要输出webpack编译时那一大串信息

这样配置基本上就完成了,跑一把./node_modules/karma/bin/karma start --single-run

react karma

很好,结果符合预期。open coverage/lcov-report/index.html打开覆盖率页面:

react coverage

鹅妹子音!!!直接对jsx代码做的覆盖率测试!这样React组件的测试大体上就完工了

小结

前端的代码测试主要难度是如何模拟各种各样的浏览器环境,Karma给我们提供了很好的方式,对于本地有的浏览器能自动打开并测试,本地没有的浏览器则提供直接访问的页面。前端尤其是移动端浏览器种类繁多,很难做到完美,但我们可以通过这种方式实现主流浏览器的覆盖,保证每次上线大多数用户没有问题。

持续集成

测试结果有了,接下来就是把这些测试结果接入到持续集成之中。持续集成是一种非常优秀的多人开发实践,通过代码push触发钩子,实现自动运行编译、测试等工作。接入持续集成后,我们的每一次push代码,每个Merge Request都会生成对应的测试结果,项目的其他成员可以很清楚地了解到新代码是否影响了现有的功能,在接入自动告警后,可以在代码提交阶段就快速发现错误,提升开发迭代效率。

持续集成会在每次集成时提供一个几乎空白的虚拟机器,并拷贝用户提交的代码到机器本地,通过读取用户项目下的持续集成配置,自动化的安装环境和依赖,编译和测试完成后生成报告,在一段时间之后释放虚拟机器资源。

开源的持续集成

开源比较出名的持续集成服务当属Travis,而代码覆盖率则通过Coveralls,只要有GitHub账户,就可以很轻松的接入Travis和Coveralls,在网站上勾选了需要持续集成的项目以后,每次代码push就会触发自动化测试。这两个网站在跑完测试以后,会自动生成测试结果的小图片

build result

Travis会读取项目下的.travis.yml文件,一个简单的例子:

language: node_js
node_js:
  - "stable"
  - "4.0.0"
  - "5.0.0"
script: "npm run test"
after_script: "npm install coveralls@2.10.0 && cat ./coverage/lcov.info | coveralls"

language定义了运行环境的语言,而对应的node_js可以定义需要在哪几个Node.js版本做测试,比如这里的定义,代表着会分别在最新稳定版、4.0.0、5.0.0版本的Node.js环境下做测试

而script则是运行测试所用的命令。一般情况下,都应该把这个项目开发所需要的命令写在package.json的scripts里面,比如我们的测试命令./node_modules/karma/bin/karma start --single-run就应当这样写到scripts里:

{
  "scripts": {
    "test": "./node_modules/karma/bin/karma start --single-run"
  }
}

而after_script则是在测试完成之后运行的命令,这里需要上传覆盖率结果到coveralls,只需要安装coveralls库,然后获取lcov.info上传给Coveralls即可

更多配置请参照Travis官网介绍

这样配置后,每次push的结果都可以上Travis和Coveralls看构建和代码覆盖率结果了

travis

coveralls

小结

项目接入持续集成在多人开发同一个仓库时候能起到很大的用途,每次push都能自动触发测试,测试没过会发生告警。如果需求采用Issues+Merge Request来管理,每个需求一个Issue+一个分支,开发完成后提交Merge Request,由项目Owner负责合并,项目质量将更有保障

总结

这里只是前端测试相关知识的一小部分,还有非常多的内容可以深入挖掘,而测试也仅仅是前端流程自动化的一部分。在前端技术快速发展的今天,前端项目不再像当年的刀耕火种一般,越来越多的软件工程经验被集成到前端项目中,前端项目正向工程化、流程化、自动化方向高速奔跑。还有更多优秀的提升开发效率、保证开发质量的自动化方案亟待我们挖掘。


Why You Can’t Trust GPS in China

by Geoff Manaugh February 26, 2016

One of the most interesting, if unanticipated, side effects of modern copyright law is the practice by which cartographic companies will introduce a fake street—a road, lane, or throughway that does not, in fact, exist on the ground—into their maps. If that street later shows up on a rival company’s products, then they have all the proof they need for a case of copyright infringement. Known as trap streets, these imaginary roads exist purely as figments of an overactive legal imagination.

Trap streets are also compelling evidence that maps don’t always equal the territory. What if not just one random building or street, however, but an entire map is deliberately wrong? This is the strange fate of digital mapping products in China: there, every street, building, and freeway is just slightly off its mark, skewed for reasons of national and economic security.

The result is an almost ghostly slippage between digital maps and the landscapes they document. Lines of traffic snake through the centers of buildings; monuments migrate into the midst of rivers; one’s own position standing in a park or shopping mall appears to be nearly half a kilometer away, as if there is more than one version of you on the loose. Stranger yet, your morning running route didn’t quite go where you thought it did.

It is, in fact, illegal for foreign individuals or organizations to make maps in China without official permission. As stated in the “Surveying and Mapping Law of the People’s Republic of China,” for example, mapping—even casually documenting “the shapes, sizes, space positions, attributes, etc. of man-made surface installations”—is considered a protected activity for reasons of national defense and “progress of the society.” Those who do receive permission must introduce a geographic offset into their products, a kind of preordained cartographic drift. An entire world of spatial glitches is thus deliberately introduced into the resulting map.

The central problem is that most digital maps today rely upon a set of coordinates known as the World Geodetic System 1984, or WGS-84; the U.S. National Geospatial-Intelligence Agency describes it as “the reference frame upon which all geospatial-intelligence is based.” However, as software engineer Dan Dascalescu writes in a Stack Exchange post, digital mapping products in China instead use something called “the GCJ-02 datum.” As he points out, an apparently random algorithmic offset “causes WGS-84 coordinates, such as those coming from a regular GPS chip, to be plotted incorrectly on GCJ-02 maps.” GCJ-02 data are also somewhat oddly known as “Mars Coordinates,” as if describing the geography of another planet. Translations back and forth between these coordinate systems—to bring China back to Earth, so to speak—are easy enough to find online, but they are also rather intimidating to non-specialists.

While algorithmic offsets introduced into digital maps might sound like nothing more than a matter of speculative concern—something more like a dinner conversation for fans of William Gibson novels—it is actually a very concrete issue for digital product designers. Releasing an app, for example, whose location functions do not work in China has immediate and painfully evident user-experience, not to mention financial, implications.

Shanghai China Map
Google Maps
One such app designer posted on the website Stack Overflow to ask about Apple’s “embeddable map viewer.” To make a long story short, when used in China, Apple’s maps are subject to “a varying offset [of] 100-600m which makes annotations display incorrectly on the map.” In other words, everything there—roads, nightclubs, clothing stores—appears to be 100-600 meters away from its actual, terrestrial position. The effect of this is that, if you check the GPS coordinates of your friends, as blogger Jon Pasden writes, “you’ll likely see they’re standing in a river or some place 500 meters away even if they’re standing right next to you.”

The same thread on Stack Overflow goes on to explain that Google also has its own algorithmically derived offset, known as “_applyChinaLocationShift” (or more humorously as “eviltransform”). The key, of course, to offering an accurate app is to account for this Chinese location shift before it ever happens—to distort the distortions before they occur.

In addition to all this, Chinese geographic regulations demand that GPS functions must either be disabled on handheld devices or they must be made to display a similar offset. If a given device—such as a smartphone or camera—detects that it is in China, then its ability to geo-tag photos is either temporarily unavailable or strangely compromised. Once again, you would find that your hotel is not quite where your camera wants it to be, or that the restaurant you and your friends want to visit is not, in fact, where your smartphone thinks it has guided you. Your physical footsteps and your digital tracks no longer align.

It is worth pointing out that this raises interesting geopolitical questions. If a traveler finds herself in, say, Tibet or on a short trip to the artificial islands of the South China Sea—or perhaps simply in Taiwan—are she and her devices really “in China”? This seemingly abstract question might already be answered, without the traveler even knowing that it’s been asked, by circuits inside her phone or camera. Depending on the insistence of China’s territorial claims and the willingness of certain manufacturers to acknowledge those assertions, a device might no longer offer accurate GPS readings.

Put another way, you might not think you’ve crossed an international border—but your devices have. This is just one, relatively small example of how complex geopolitical questions can be embedded in the functionality of our handheld devices: cameras and smartphones are suddenly thrust to the front line of much larger conversations about national sovereignty.

These sorts of examples might sound like inconsequential travelers’ trivia, but for China, at least, cartographers are seen as a security threat: China’s Ministry of Land and Resources recently warned that “the number of foreigners conducting surveys in China is on the rise,” and, indeed, the government is increasingly cracking down on those who flout the mapping laws. Three British geology students discovered this the hard way while “collecting data” on a 2009 field trip through the desert state of Xinjiang, a politically sensitive area in northwest China. The students’ data sets were considered “illegal map-making activities,” and they were fined nearly $3,000.

What remains so oddly compelling here is the uncanny gulf between the world and its representations. In a well-known literary parable called “On Exactitude in Science,” from Collected Fictions, Argentine fabulist Jorge Luis Borges describes a kingdom whose cartographic ambitions ultimately get the best of it. The imperial mapmakers, Borges writes, devised “a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.” This 1:1 map, however, while no doubt artistically and conceptually wondrous, was seen as utterly useless by future generations. Rather than enlighten or educate, this sprawling and inescapable super-map merely smothered the very territory whose connections it sought to clarify.

Mars Coordinates, eviltransform, _applyChinaLocationShift, the “China GPS Offset Problem”—whatever name you want to describe this contemporary digital phenomenon of full-scale digital maps sliding precariously away from their referents, the gap between map and territory is suitably Borgesian.

Indeed, Borges ends his tiniest of parables with an image of animals and beggars living wild amidst the “tattered ruins” of an abandoned map, unaware of what its original purpose might have been—perhaps foreshadowing the possibility that travelers several decades from now will wander amidst remote Chinese landscapes with outdated GPS devices in hand, marveling at their apparent discovery of some parallel, dislocated version of the world that had been hiding in plain view.

Geoff wishes to thank Twitter user @0xdeadbabe for first pointing out “Mars Coordinates” to him. Follow Geoff on Twitter at @bldgblog.



Webpack + React 开发之路


杂七杂八的想法

记得大二的时候刚学习 Java,我做的第一个图形化用户界面是一个仿QQ的登录窗口,其实就是一些输入框和按钮,但是记得当时觉得超级有成就感,于是后来开始喜欢上写 Java,还做了很多小游戏像飞机大战、坦克大战啥的,自己还觉得特别有意思。
后来开始学前端,其实想想也是做图形化用户界面,不过是换了一个运行环境而已。但是写着写着发现很不顺手,和用 Java 写感觉很不一样,到底哪不对呢。
用 Java 写界面的时候,按钮是按钮,输入框是输入框,我做登录窗口的时候,只要定义一个登录窗口类,然后设置布局、把按钮、输入框加进去,一个登录窗口就出来了。
反观前端的实现,要写一个登录窗口,得先在 html 里定义结构,在 css 里制定样式,然后在 js 里添加行为,最头疼的是 js 里不仅仅只是这个登录窗口的行为,还有页面初始化的代码、别的按钮的监听等等等等一大堆乱七八糟的代码(作为菜鸟的自我吐槽)
其实我理解的以上问题的关键词就是 组件化 ,之所以以前写的那么别扭,很大程度上是自己带着组件化的思想,但是写不出组件化的代码。

直到现在使用上 React,真是感觉眼前一亮。当然还有很多很多需要学习的地方,就从现在开始,配合着 Webpack,踏上 React 的开发之路吧。

制作一个微博发送表单

下面通过 React 编写一个简单的例子,就是常用的微博发送的表单。

一、新建项目

项目目录如下:

/js
-- /components
---- /Publisher
------ Publish.css
------ Publish.jsx
-- app.js
/css
-- base.css
index.html
webpack.config.js
  • js/components 目录存放所有的组件,比如 Publisher 是我们的表单组件,里面存放这个表单的子组件(如果有的话)、组件的 jsx 文件以及组件自己的样式。
  • js/app.js 是入口文件
  • css 存放全局样式
  • index.html 主页
  • webpack.config.js webpack 的配置文件

二、配置 Webpack

编辑 webpack.config.js

var webpack = require('webpack');

module.exports = {
    entry: './js/app.js',
    output: {
        path: __dirname,
        filename: 'bundle.js'
    },
    module: {
        loaders: [
            {
                test: /\.jsx?$/,
                loader: 'babel',
                query: {
                    presets: ['react', 'es2015']
                }
            },
            {
                test: /\.css$/,
                loader: 'style!css'
            }
        ]
    },
    plugins: [
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            }
        })
    ]
}

上一篇文章 里是使用 webpack 进行 ES6 开发,其实不管是 ES6 也好,React 也好,webpack 起到的是一个打包器的作用,配置项和这里大致相似,就不再赘述。

不同的是在 babel-loader 里增加了 react 的转码规则。

另外这里使用到了 webpack 的一个内置插件 UglifyJsPlugin，通过它可以对生成的文件进行压缩。详细的介绍请看这里

三、安装一系列东东

首先保证安装了 nodejs 。

1) 初始化项目

npm init

2) 安装 webpack

npm install webpack -g

3) 安装 React

npm install react react-dom --save-dev

4) 安装加载器

本项目使用到的有 babel-loader、css-loader、style-loader。

  • babel-loader 进行转码
  • css-loader 对 css 文件进行打包
  • style-loader 将样式添加进 DOM 中

详细请看这里

npm install babel-loader css-loader style-loader --save-dev

5) 安装转码规则

npm install babel-preset-es2015 babel-preset-react --save-dev

四、码代码

index.html 中,引用的 js 文件是通过 webpack 生成的 bundle.jscss 文件是写在 /css 目录下的 base.css

index.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Document</title>
    <link rel="stylesheet" href="css/base.css">
</head>
<body>
    <div id="container"></div>
    <script src="bundle.js"></script>
</body>
</html>

/css/base.css

base.css 里面存放的是全局样式,也就是与组件无关的。

html, body, textarea {
    padding: 0;
    margin: 0;
}

body {
    font: 12px/1.3 'Arial','Microsoft YaHei';
    background: #73a2b0;
}

textarea {
    resize: none;
}

a {
    color: #368da7;
    text-decoration: none;
}

/js/app.js

/js/app.js 是入口文件,引入了 Publisher 组件

import React from 'react';
import ReactDOM from 'react-dom';
import Publisher from './components/Publisher/Publisher.jsx';

ReactDOM.render(
    <Publisher />,
    document.getElementById('container')
);

/js/components/Publisher/Publisher.jsx

好的,下面开始编写组件,首先,确定这个组件的组成部分,因为是一个简单的表单,所以不需要继续划分子组件

表单分为上中下三部分,title 里面包含热门微博和剩余字数的提示,textElDiv 包含输入框,btnWrap 包含发布按钮。

import React from 'react';

class Publisher extends React.Component {
    constructor(...args) {
        super(...args);
    }

    render() {
        return (
            <div className="publisher">
                <div className="title">
                    <a href="#">热门微博</a>
                    <div className="tips">还可以输入140</div>
                </div>
                <div className="textElDiv">
                    <textarea></textarea>
                </div>
                <div className="btnWrap">
                    <a href="#" className="publishBtn">发布</a>
                </div>
            </div>
        );
    }
}

export default Publisher;

我们暂时通过 className 给组件定义了样式名,但还没有实际写样式代码,因为要保证组件的封装性,所以我们不希望组件的样式编写到全局中去以免影响其他组件,最好像我们的目录划分一样,组件自己的样式跟着组件自己走,而且这个样式不影响其他组件。这里就需要用到 css-loader了。

css-loader 可以将 css 文件进行打包,而且可以对 css 文件里的 局部 className 进行哈希编码。这意味着可以这样写样式文件:

/* xxx.css */

:local(.className) { background: red; }
:local .className { color: green; }
:local(.className .subClass) { color: green; }
:local .className .subClass :global(.global-class-name) { color: blue; }

经过处理之后,则变成:

._23_aKvs-b8bW2Vg3fwHozO { background: red; }
._23_aKvs-b8bW2Vg3fwHozO { color: green; }
._23_aKvs-b8bW2Vg3fwHozO ._13LGdX8RMStbBE9w-t0gZ1 { color: green; }
._23_aKvs-b8bW2Vg3fwHozO ._13LGdX8RMStbBE9w-t0gZ1 .global-class-name { color: blue; }

也就是我们可以在不同的组件样式中定义 .btn 的样式名,但是经过打包之后,在全局里面就被转成了不同的哈希编码,由此解决了 css 全局命名冲突的问题。

关于 css-loader 更详细的使用,请参考这里
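补充一个小例子（这是笔者按 css-loader 的文档整理的示例写法，并非本文原有配置）：如果不想在每个选择器前都写 :local，也可以在 loader 的查询参数里开启 modules 模式，让局部作用域成为默认行为，并用 localIdentName 控制生成的哈希类名格式，大致如下：

// webpack.config.js 片段（示例，基于 webpack 1 时代 css-loader 的查询参数写法）
{
    test: /\.css$/,
    // modules：类名默认按局部作用域处理；localIdentName：控制生成的哈希类名格式
    loader: 'style!css?modules&localIdentName=[name]__[local]___[hash:base64:5]'
}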

那么 Publisher 的样式如下:

/js/components/Publisher/Publisher.css

:local .publisher{
    width: 600px;
    margin: 10px auto;
    background: #ffffff;
    box-shadow: 0 0 2px rgba(0,0,0,0.15);
    border-radius: 2px;
    padding: 15px 10px 10px;
    height: 140px;
    position: relative;
    font-size: 12px;
}

:local .title{
    position: relative;
}

:local .title div {
    position: absolute;
    right: 0;
    top: 2px;
}

:local .tips {
    color: #919191;
    display: none;
}

:local .textElDiv {
    border: 1px #cccccc solid;
    height: 68px;
    margin: 25px 0 0;
    padding: 5px;
    box-shadow: 0px 0px 3px 0px rgba(0,0,0,0.15) inset;
}

:local .textElDiv textarea {
    border: none;
    border: 0px;
    font-size: 14px;
    word-wrap: break-word;
    line-height: 18px;
    overflow-y: auto;
    overflow-x: hidden;
    outline: none;
    background: transparent;
    width: 100%;
    height: 68px;
}

:local .btnWrap {
    float: right;
    padding: 5px 0 0;
}

:local .publishBtn {
    display: inline-block;
    height: 28px;
    line-height: 29px;
    width: 60px;
    font-size: 14px;
    background: #ff8140;
    border: 1px solid #f77c3d;
    border-radius: 2px;
    color: #fff;
    box-shadow: 0px 1px 2px rgba(0,0,0,0.25);
    padding: 0 10px 0 10px;
    text-align: center;
    outline: none;
}

:local .publishBtn.disabled {
    background: #ffc09f;
    color: #fff;
    border: 1px solid #fbbd9e;
    box-shadow: none;
    cursor: default;
}

然后就可以在 Publisher.jsx 中这样使用了

import React from 'react';
import style from './Publisher.css';

class Publisher extends React.Component {
    constructor(...args) {
        super(...args);
    }

    render() {
        return (
            <div className={style.publisher}>
                <div className={style.title}>
                    <a href="#">热门微博</a>
                    <div className={style.tips}>还可以输入140</div>
                </div>
                <div className={style.textElDiv}>
                    <textarea></textarea>
                </div>
                <div className={style.btnWrap}>
                    <a href="#" className={style.publishBtn}>发布</a>
                </div>
            </div>
        );
    }
}

export default Publisher;

这样组件的样式已经添加进去了,接下来就纯粹是进行 React 开发了。

编写 Publisher.jsx

表单的需求如下:

  1. 输入框获取焦点时,输入框边框变为橙色,右上角显示剩余字数的提示;输入框失去焦点时,输入框边框变为灰色,右上角显示热门微博。
  2. 输入字数小于且等于140字时,提示显示剩余可输入字数;输入字数大于140时,提示显示已经超过字数。
  3. 输入字数大于0且不大于140字时,按钮为亮橙色且可点击,否则为浅橙色且不可点击。

首先,给 textarea 添加 onFocusonBluronChange 事件,通过 handleFocushandleBlurhandleChange 来处理输入框获取焦点、失去焦点和输入。

然后将输入的内容保存在 state 里,这样每当内容发生变化时,就能方便的对变化进行处理。

对于按钮的变化、热门微博和提示之间的转换,根据 state 中内容的变化来切换样式就能轻松地做到。

完整代码如下:

import React from 'react';
import style from './Publisher.css';

class Publisher extends React.Component {
    constructor(...args) {
        super(...args);
        // 定义 state
        this.state = {
            content: ''
        }
    }

    /**
    * 获取焦点
    **/
    handleFocus() {
        // 改变边框颜色
        this.refs.textElDiv.style.borderColor = '#fa7d3c';
        // 切换右上角内容
        this.refs.hot.style.display = 'none';
        this.refs.tips.style.display = 'block';
    }

    /**
    * 失去焦点
    **/
    handleBlur() {
        // 改变边框颜色
        this.refs.textElDiv.style.borderColor = '#cccccc';
        // 切换右上角内容
        this.refs.hot.style.display = 'block';
        this.refs.tips.style.display = 'none';
    }

    /**
    * 输入框内容发生变化
    **/
    handleChange(e) {
        // 改变状态值
        this.setState({
            content: e.target.value
        });
    }

    render() {
        return (
            <div className={style.publisher}>
                <div className={style.title}>
                    <a href="#" ref="hot">热门微博</a>
                    <div className={style.tips} ref="tips">
                        {this.state.content.length > 140 ? '已超出' : '还可以输入'}{this.state.content.length > 140 ? this.state.content.length - 140 : 140 - this.state.content.length}
                    </div>
                </div>
                <div className={style.textElDiv} ref="textElDiv">
                    <textarea
                        onFocus={this.handleFocus.bind(this)}
                        onBlur={this.handleBlur.bind(this)}
                        onChange={this.handleChange.bind(this)}>
                    </textarea>
                </div>
                <div className={style.btnWrap}>
                    <a href="#" className={this.state.content.length > 0 && this.state.content.length <= 140 ? style.publishBtn : `${style.publishBtn} ${style.disabled}`}>发布</a>
                </div>
            </div>
        );
    }
}

export default Publisher;

五、运行

  • 通过 --display-error-detail 可以显示 webpack 出现错误的中间过程,方便在出错时进行查看。
  • --progress --colors 可以显示进度
  • --watch 可以监视文件的变化并在变化后重新加载

运行如下:

webpack --display-error-detail --progress --colors --watch

React.js Conf: The Good Parts


I had the amazing opportunity to attend React.js conf in San Francisco on the 22nd/23rd of February thanks to a generous diversity scholarship from Facebook! Two full days of talks from the creators, contributors and users of React.js, and 600+ React enthusiasts from around the world there to take it all in. It was a great chance to meet other React-ers and share ideas in a place where it is completely acceptable to take your laptop out at breakfast or the bar and talk shamelessly about code.

From what I heard, tickets this year were incredibly hard to get and if you didn’t have time to watch the live stream, here’s my summary of ‘React.js Conf: The Good Parts’ and a plethora of links for cool resources I heard about from the talks and talking to other people.

Nick Schrock’s Keynote really set the tone for the conference, highlighting how React has grown from a JavaScript library into its own ecosystem that can fundamentally advance web development and React Native is completely changing the way mobile apps are being built, with cross-stack engineers replacing platform specific roles and teams. He also pointed out some pretty impressive figures — like the fact that the Facebook Ads manager and Groups iOS and Android apps share 85–90% of their React Native code and were built with a single team! Many of the features on the Facebook app are also already in React Native, one being the Facebook Friend’s day video which you might have seen a couple of weeks ago!

Ben Alpert’s talk on ‘Making great React Apps’ touched on several ideas of what still needed to be improved in React and React Native including animations, gestures, rendering fast lists and tools for improving developer experience across React and React Native — like what if you could remove the need for setting up webpack/babel to quickly prototype a new project with just one file e.g. create app.js and just call ‘react run platform=ios’?

The announcement of Draft.js, a rich text editing library for React from Facebook, got everyone pretty excited! Making input text bold, cut, copy, paste, adding custom transformations like the ‘mentions/check-ins’ that can be added for statuses on Facebook — Draft.js makes this all infinitely easier for your React Apps. Isaac Salier-Hellendang’s talk explained how the library takes the good parts of the ‘contentEditable’ browser feature (native cursor and selection behaviour, native input events & key events, all rich text features, automatic autogrowing of elements, accessibility and that it works in all browsers) and applies the principles of React to turn it into a controlled component — like a Text Input field with an onChange event handler and the input value saved to a state. I definitely recommend watching the full talk if you’re interested in the implementation details.

Data handling in React applications was a recurring theme throughout the two days with multiple talks highlighting the many different options out there. Lin Clark’s talk illustrated with her quirky code-cartoons made for a fun and extremely clear introduction to Flux, Redux and Relay.

In short, Flux is out, Relay is a bit too complicated to start with and Redux wins.

But seriously, Redux cuts some of the complexity of flux by using functional composition instead of callback registration, has a single store with immutable state, is super declarative, is great for testing and makes hot reloading and time travel debugging possible. Relay on the other hand requires a GraphQL server which is a beast in itself and takes a lot more set up, but has the additional benefits of being able to handle caching, query optimisation and network errors and the readability that comes from co-location of queries and views. Relay also allows deferred queries (e.g. retrieval of the title and text of an article and comments later) and reduction in the size of queries — relay retrieves data and puts it in a local cache so some query data can just be retrieved from the cache. Jared Forsyth’s talk introduced Re-frame and Om/next, both ClojureScript libraries. Re-frame is like Redux but uses subscriptions to define how to get data from state, which can be memoized so subscriptions are reused between components and for those of you familiar with Redux, there’s no need to do mapStateToProps(state){} in the container. Om/next is a Relay-like library but without the need for a GraphQL server.
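For anyone who hasn’t used Redux yet, a minimal sketch of the single-store idea may help (this is my own illustrative example, not code from any of the talks): state lives in one store, and the only way to change it is to dispatch actions through a pure reducer function.

// Minimal Redux sketch: one store, state changed only via a pure reducer
import { createStore } from 'redux';

// Reducer: (previousState, action) => newState, never mutates the old state
function counter(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      return state;
  }
}

const store = createStore(counter);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' }); // logs { count: 1 }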

Optimisation and performance improvement were also mentioned repeatedly. Aditya Punjani’s talk about optimising the FlipKart mobile website for 2G connections in India introduced two really interesting ideas: the App Shell architecture instead of traditional server-side rendering (breaking down the app into a loading state, with placeholders for data, and a loaded state, with the loading state being displayed in the first paint of the page) and service workers, a very cool browser API which can be used to intercept all network requests so you can choose to either retrieve data from cache (for offline use) or from the network.
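As a rough illustration of the service worker idea (my own sketch, not FlipKart’s actual code), a fetch handler can answer requests from a cache first and only hit the network when nothing cached is available:

// sw.js: serve responses from the cache when possible, fall back to the network
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});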

Bhuwan Khattar suggested some methods for speeding up start-up time. Branches in code (e.g. for a/b testing) can lead to slow start-up times, as all the modules for each branch need to be downloaded, leading to lots of unnecessary overhead. This is difficult to optimise because the branch chosen might be dependent on runtime data. One of the solutions he suggested was inline requires for lazy execution — i.e. only require things when they are necessary rather than requiring all the modules at the top of the file. Bhuwan also suggested using a helper function ‘matchRoute’ which does pattern matching based on the route name and only conditionally downloads and executes the code for each route. The example code below is from this great article by Jan Pojer about routing (specifically with Relay and react-router).
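The example code from that article did not survive the copy here, so as a rough stand-in, here is a hypothetical sketch of the inline-require idea (the matchRoute helper and the route module paths are made-up names, used purely for illustration): the module for a route is only required once that route is actually hit.

// Hypothetical sketch of lazy, per-route requires (not the original example code)
function matchRoute(routeName) {
  switch (routeName) {
    case 'feed':
      // required only when the feed route is actually rendered
      return require('./routes/FeedRoute');
    case 'profile':
      return require('./routes/ProfileRoute');
    default:
      return require('./routes/NotFoundRoute');
  }
}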

Another solution used by Facebook is to use a wrapper instead of ‘require’ — use the wrapper to require dependencies that are not needed on initial render — these are downloaded as necessary with a loading indicator being shown in the meantime.

In Tadeu Zagallo’s talk on ‘Optimising React Native’ he showed how to profile apps using the simulator in Xcode — bring up the developer menu inside the simulator (Cmd-Z) and click ‘start profiling’ and then view in chrome. This brings up a handy menu in Chrome so you can see which functions take the longest to run. He also mentioned two things to remember: always add component keys for lists — React’s DOM diffing algorithm uses the keys to check which items need re-rendering so add keys to make sure all the list items are not re-rendered and try to always use the ‘shouldComponentUpdate’ lifecycle method to prevent re-render unless necessary.
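To make the “keys” advice concrete, here is a tiny illustrative snippet (my own example, not from the talk): each row gets a stable key derived from the data rather than from the array index, so the diffing algorithm can tell which rows actually changed.

// Stable keys let React's diffing skip rows that haven't changed
import React from 'react';

function UserList(props) {
  return (
    <ul>
      {props.users.map(user => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}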

One of the most exciting things for me was hearing about the new Navigation API in React-Native from Eric Vicenti. After struggling with the Navigator component for months, the new declarative version sounds like a much better solution, borrowing heavily from Redux to remove the state from within the component and using actions for transitioning between views. The new changes should also support deep linking with URIs, a feature that’s highly desirable in mobile apps. I still haven’t had a look at the new version properly but it’s now available as ‘NavigationExperimental’ in the latest release of React Native.

Leland Richardson’s talk about testing React Native was also super exciting! Not only has he created a library ‘Enzyme’ to help with traversing the DOM tree when shallow rendering React, but he’s also gone and created a complete mock of the entire React Native API! Enzyme has shallow, mount and render methods as well as methods to find nodes of a specific type in the tree. And he’s also created some handy examples of how to use both libraries (links at the end)!

Jamison Dance’s talk has really made me want to try the Elm language! I’d only ever heard of Elm so it was completely new to me but in short, Elm is a functional programming language that transpiles to JavaScript and runs in the browser. It has a static type system (so no run time errors!) and there’s no ‘null’! Elm only has stateless functions and only immutable data. Jamison showed how Elm applications have a similar tree architecture to React apps with parent components passing data down to child components which then respond to user interactions by sending data back to their parents which then update the top level app state and cause the App to re-render. Updating of state in React can be done in a number of ways including different libraries discussed earlier like Redux or Flux, but in Elm there’s a built in system for updating state using Observables. The tradeoff is between constraints and guidelines — if you trust the language designers to have made good decisions for you, it eliminates the need to try and decide between different libraries for your application and you can focus on the problems specific to your application domain. Jamison suggested that learning Elm could help you become a better React Developer and I think I’m going to give it a go!

There were also some cool tips/ideas from the lightning talks:
* A way of making React Native code more reusable between iOS and Android: create a wrapper for components that uses Platform.OS to check the operating system (see the sketch after this list)
* Nuclide IDE support for react-native to make the developer experience just like the web. There’s now a react-native-debugger, react-native-inspector and the ability to add breakpoints!
* React Native for web! A way to build truly platform agnostic code, enable universal rendering and it comes with built in web accessibility!
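A minimal sketch of that Platform.OS wrapper idea (my own illustration; the Header components are hypothetical names):

// Pick a platform-specific implementation behind one shared component
import { Platform } from 'react-native';
import HeaderIOS from './HeaderIOS';         // hypothetical iOS implementation
import HeaderAndroid from './HeaderAndroid'; // hypothetical Android implementation

// Callers just import Header and never need to check the platform themselves
const Header = Platform.OS === 'ios' ? HeaderIOS : HeaderAndroid;
export default Header;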

And then there were some of the wackier talks on Virtual Reality , Open-GL effects and making an arduino-raspberry-pi-React powered version of Jeopardy!!

Overall the conference was an amazing experience and my list of ‘new things to learn’ has now grown completely out of hand!

Links

Libraries/APIs

For React:
* Draft.js — Rich Text Editing with React
* Email templating using React — Oy-Vey
* Gatsbyjs static site generator using React + Markdown
* Open GL for React!
* gl-react
* gl-react-dom
* gl-react-inspector
* GL Sandbox
* Falcor — Data fetching library by Netflix
* Cycle.js — data flow architecture based on observables
* Enzyme — JavaScript Testing utility for React that mimicks jQuery’s API for DOM traversal
* Guide for using Enzyme with Webpack

For React Native
* NavigationExperimental — new declarative Navigator API
* Cordova plugins for React Native
* Open GL for React Native — gl-react-native
* React Native Web
* A complete mock of React Native
* Guide for testing React Native with Enzyme
* Example React Native tests with Enzyme

Random…
* Service Workers to support offline experiences, push notifications and loads more
* Push Notifications using Google Cloud Messaging
* Chrome api for speech recognition
* Webpack plugin to install modules from ‘import’ statements (thanks Eric Clemmons!)
* API Archive of Jeopardy Questions!!
* Track.js — Monitor and report JS errors in web applications
* Raygun.io — Crash reporting

Developer Tools

* React Native plugin for Visual Studio Code
* Deco IDE for React Native
* Nuclide with React Native including react-native-debugger, ability to add breakpoints and react native inspector, flow support
* HockeyApp — distribute beta versions of apps without using the app store

Explanations/Tutorials

https://github.com/nikhilaravi/reactconf2016



Running Mocha + Istanbul + Babel


http://stackoverflow.com/questions/33621079/running-mocha-istanbul-babel

Using Babel 6.x, let’s say we have file test/pad.spec.js:

import pad from '../src/assets/js/helpers/pad';
import assert from 'assert';

describe('pad', () => {
  it('should pad a string', () => {
    assert.equal(pad('foo', 4), '0foo');
  });
});

Install a bunch of crap:

$ npm install babel-istanbul babel-cli babel-preset-es2015 mocha

Create a .babelrc:

{
  "presets": ["es2015"]
}

Run the tests:

$ node_modules/.bin/babel-node node_modules/.bin/babel-istanbul cover \
node_modules/.bin/_mocha -- test/pad.spec.js


  pad
     should pad a string


  1 passing (8ms)

=============================================================================
Writing coverage object [/Volumes/alien/projects/forked/react-flux-puzzle/coverage/coverage.json]
Writing coverage reports at [/Volumes/alien/projects/forked/react-flux-puzzle/coverage]
=============================================================================

=============================== Coverage summary ===============================
Statements   : 100% ( 4/4 )
Branches     : 66.67% ( 4/6 ), 1 ignored
Functions    : 100% ( 1/1 )
Lines        : 100% ( 3/3 )
================================================================================

UPDATE: I’ve had success using nyc (which consumes istanbul) instead of istanbul/babel-istanbul. This is somewhat less complicated. To try it:

Install stuff (you can remove babel-istanbul and babel-cli):

$ npm install babel-core babel-preset-es2015 mocha nyc

Create .babelrc as above.

Execute this:

$ node_modules/.bin/nyc --require babel-core/register node_modules/.bin/mocha \
test/pad.spec.js

…which should give you similar results. By default, it puts coverage info into .nyc_output/, and prints a nice text summary in the console.

Note: You can remove node_modules/.bin/ from any of these commands when placing the command in package.json‘s scripts field.
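For example, the nyc command could be wired into package.json’s scripts roughly like this (a sketch; adjust the test file path to your own project):

{
  "scripts": {
    "test": "nyc --require babel-core/register mocha test/pad.spec.js"
  }
}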


Developing React Native Android Apps with Linux


Posted by | October 19, 2015 | Apps, Blog, Programming | 4 Comments

React Native is Facebook’s open source framework for building native applications on iOS and Android. It achieves this by providing a common developer experience so that a developer learns one set of tools and can apply it to both platforms. It takes the component approach used by React and applies it to the mobile app world.

As part of our research and development work here at Black Pepper, we’ve been investigating how we can make use of React Native to write Android and iOS apps for our customers. Because we predominantly use Linux as our development environment, I ran into a few issues along the way and this guide will hopefully help others avoid the same pitfalls.

Why do we need this React Native guide?

Facebook have provided a great deal of information about React Native on GitHub; however, because it’s early days for React Native and Facebook use OS X for development, getting up and running on Linux isn’t as straightforward as it should be. Additionally, the Android version of React Native hasn’t been publicly available for very long, so there are fewer resources available.

Approach

I’m a big fan of using Docker to isolate tools, particularly when testing out something new. It also means you can get a new development environment up and running in no time; simply pull down the docker images you need. So to get up and running with React Native, I followed the Getting Started guide but added everything to a Dockerfile as I went.

The Docker image is created in such a way that it will have access to a directory on the host machine for code storage, allowing you to use whichever editor or IDE you prefer, along with any other day to day tools such as git. Additionally I made use of privileged mode so that the docker container will be able to access the USB ports of the host, so the React Native app can be run on a physical device, rather than just in the emulator.

Prerequisites

You’ll need to have installed Docker and familiarised yourself with how it works if you want to try this out.
You’ll also need to have familiarised yourself with Android app development, React and React Native.

Installation

Currently the Dockerfile only exists in my personal github repo but I’m in the process of moving it to the Black Pepper repo and from there I’ll publish the image so you’ll be able to retrieve it with a simple “docker pull”.

However, for the time being you can obtain and build the image as follows:

git clone https://github.com/gilesp/docker.git
docker build -t react-native docker/react_native

There are a couple of shell script files in the repo, the most interesting ones are “react-native.sh” and “react_bash.sh” (The inconsistency in naming is one of the things I need to sort out before pushing everything to the Black Pepper repos). Add these shell scripts to your path, as follows:

ln -s path/to/dockerrepo/react_native/react-native.sh bin/react-native
ln -s path/to/dockerrepo/react_native/react_bash.sh bin/react-bash

Usage

Note: The shell scripts assume that your current working directory is where you’ll be storing the react native project and so they map them as a volume in the docker container.

Starting a new project

react-native init AwesomeProject

This will create a directory called AwesomeProject in your current working directory and populate it with the React Native app infrastructure.

Running your project

Plug in an android device to your machine via USB.

Now, start a shell in the Docker container

cd AwesomeProject
react-bash

Now we need to create a reverse tcp connection with adb, allowing the app on the phone to communicate with the nodejs server running in the container:

adb reverse tcp:8081 tcp:8081

And finally, start the server and deploy the app to your device:

react-native start > react-start.log 2>&1 &
react-native run-android

Note: The run-android command should start the server automatically but I found it to be unreliable, hence why I manually start it first.

If all goes well, then the application will launch on your device. Shake the phone to open the developer menu and turn on Reload JS. Now, whenever you make changes to the react native source code (such as editing index.android.js), the changes will instantly appear on the device.

Summary

Working with React Native under Linux turned out to be relatively painless, although there are still some issues I’d like to resolve with the docker image. The main one is that I have, so far, been unable to get the emulator to work correctly, due to issues with 64-bit support. I can run an emulator on the host machine and have react native use that (in the same way as it would use a real device via USB), but it’d be good to remove that dependency and have it entirely containerised.

Other than that though, it’s entirely feasible to develop Android apps using React Native under Linux.


fir.im Weekly –不能错过的 GitHub Top 100 开源库


好的工具&资源,会带来更多的灵感。本期 fir.im Weekly 精选了一些实用的 iOS,Android 的使用工具和源码分享,还有前端、UI方面的干货。一起来看下:)

Swift 开源项目精选

@SwiftLanguage分享。

“基于《Swift 语言指南》开源项目收录，做了一个甄别、筛选，并辅以一句话介绍。来源 GitHub。”Github 的 Swift 库已尽收眼底，简洁明了，还在不断更新中。正在学习 Swift 的同学不要错过 –>> Swift 开源项目精选。

xcbuild – Facebook 出品的开源 App 构建工具

xcbuild 是 Facebook 出品的开源 App 构建工具,能够为 App 构建过程与多平台运行提供更快构建、更好文档并兼容Xcode。Github 地址–> https://github.com/facebook/xcbuild .

Swift 烧脑体操

@唐巧_boy 出了一系列的【Swift 烧脑体操】的文章,文如题目,涨姿势必备,文章列表如下:

Swift 烧脑体操(一) – Optional 的嵌套

Swift 烧脑体操(二) – 函数的参数

Swift 烧脑体操(三) – 高阶函数

Swift 烧脑体操(四) – map 和 flatMap

GitHub Top 100的Android&iOS开源库

作者@G军仔整理了一份旨在帮助 Android 初学者快速入门以及找到适合自己学习的资料, GitHub 地址:Android_Data ,@李锦发 之前也整理了iOS版, GitHub 地址:trip-to-iOS.

Injection for Xcode:成吨的提高开发效率

@没故事的卓同学强烈推荐一个Xcode高端必备插件:Injection Plugin for Xcode.不用重新启动应用就可以让修改的代码生效。更多好玩的功能,点击这里

盘点分析 Android N 的新特性

Android N 预览版来啦!支持 Java8 了,支持多窗口了,支持更多新特性了! @代码家连夜写了一篇从开发者角度解析 Andorid N 的文章,感兴趣点击这里.

Android界面性能调优手册

界面是 Android 应用中直接影响用户体验最关键的部分。如果代码实现得不好,界面容易发生卡顿且导致应用占用大量内存。@Vince蔡培培 整理了自己的经验和分享,详情请点击这里

Android APK终极瘦身21招

@移动开发前线分享。

作者@冯建V前不久写过一篇《APK瘦身实践》,在公司的要求下,将6.5M的Apk硬生生的减到不到4M(已开启minifyEnabled等常规压缩手段),后面他根据反馈又整理出这篇Apk瘦身指南,对Android开发者更具指导意义。

文章传送门.

ZFPlayer视频播放器 源码

@任子丰写的视频播放器——ZFPlayer,基于AVPlayer,支持横屏、竖屏(全屏播放还可锁定屏幕方向),上下滑动调节音量、屏幕亮度,左右滑动调节播放进度等等,ZFPlayer荣登当日github排行榜。Github 地址:https://github.com/renzifeng/ZFPlayer

WaveLoadingView – 圆形波浪进度指示器类

开发者@潜艇_刘智艺Zzz 将 WaveLoadingView 圆形波浪进度指示器开源在Github 上,配置参数丰富点击这里查看。

JSPatch – APP 动态更新服务平台

@bang 分享的JSPatch 平台,现在开放注册。可以实时修复 iOS App 线上 bug,一键让你的 APP 拥有动态运营能力。地址见:http://jspatch.com/ .

 BugHD for JavaScript – 轻松收集前端 Error

从收集 APP 崩溃信息到全面收集网站出现的 Error, BugHD 变得更加强大。前端 er 们不用再面对 一堆 Bug 愁容满面,可以来这里看看。

Admire.so – 一个设计资源导航网站

Admire.so 钦慕网,是一个设计资源导航网站,还有一些前端er 会用到的资源。每天会添加一个新的链接,为你的创意、你的设计多一些灵感。

_
这期的 fir.im Weekly 就到这里，欢迎大家分享更多的资源。


Orange Pi One Board Quick Start Guide with Armbian Debian based Linux Distribution


Orange Pi One board is the most cost-effective development board available on the market today, so I decided to purchase a sample on Aliexpress to try out the firmware, which has not always been perfect, simply because Shenzhen Xunlong focuses on hardware design and manufacturing and spends little time on software development to keep costs low, so the latter mostly relies on the community. Armbian has become a popular operating system for ARM Linux boards in recent months, so I’ve decided to write a getting started guide for Orange Pi One using a Debian Desktop image released by the armbian community.

Orange Pi One Unboxing

But let’s start by checking out what I received. The Orange Pi One board is kept in an anti-static bag, and comes with a Regulatory Compliance and Safety Information sheet, but no guide; instead the company simply asks users to visit http://www.orangepi.org for information on how to use their boards.

Click to Enlarge

The top of the board has the most interesting bits, with Ethernet, micro USB and USB ports, HDMI port, micro SD slot, power jack, a power button, the 40-pin “Raspberry Pi” compatible header, the Allwinner H3 processor and one Samsung RAM chip. The 3-pin serial console header can be found right next to the RJ45 jack (below it in the picture).

Click to Enlarge

The bottom of the board features another Samsung RAM chip (512MB in total), and the camera interface.

Click to Enlarge

I’ve also taken a picture to compare Orange Pi One dimensions to the ones of Orange Pi 2 mini, Raspberry Pi 2, and Raspberry Pi Zero.

Click to Enlarge

By the way, while the official prices for Raspberry Pi Zero ($5), Orange Pi One ($9.99), and C.H.I.P ($9) are a little different, I ended up paying about the same for all three boards once shipping is included: £9.04 (or about $12.77) for Raspberry Pi Zero, $13.38 for Orange Pi One, and $14.22 for C.H.I.P (Cyber Monday deal for “$8”). C.H.I.P computer is not shown in the picture above simply because I have not received it yet. The performance of Orange Pi One will be much greater than the others thanks to its quad core processor, as discussed in the Raspberry Pi Zero, C.H.I.P and Orange Pi One comparison.

Installing and Setting Up Armbian on Orange Pi One

While the company claims you can download firmware on the Orange Pi Download page, they have not published a firmware image specifically for Orange Pi One, and while you could probably use an Orange Pi PC image (this may mess with the regulator), I’ve never heard anyone praise Shenzhen Xunlong for the quality of the images they’ve released, quite the contrary. While Orange Pi community member Loboris released several images for Allwinner H3 boards, he does not seem to have updated them for Orange Pi One, and I’ve heard a lot recently about the armbian distribution, which is based on Debian and targets ARM Linux boards, so that’s the image I’m going to try.

You can currently download Debian Jessie server or desktop based on Linux 3.4 legacy kernel, but once the Ethernet driver gets into Linux mainline (aka Vanilla), you’ll be able to run the latest Linux mainline on Orange Pi One, at least for headless operation.

First you’ll need to get yourself an 8GB or greater micro SD card, preferably with good performance (Class 10 or better), and use a Windows, Mac OS or Linux computer to download and flash the firmware image.

I’ve done so in Ubuntu 14.04. Once you insert the micro SD card into the computer, you may want to locate the SD card device with lsblk:

I used a 32GB class 10 micro SD card, and in my case the device is /dev/sdb. I’m going to use the command line, but you can use ImageWriter for Ubuntu or Windows, as well as some other tools for Mac OS. Let’s download the firmware, extract it, and flash it to the micro SD card (replace /dev/sdX by your own device):

Now insert the micro SD card into Orange Pi One, and connect all necessary cables and accessories. I connected HDMI and Ethernet cables, an RF dongle for an air mouse, a USB OTG adapter for a USB flash drive, the serial debug board, and the power supply. Please note that the micro USB port cannot be used to power the board, so you’ll either need to purchase the power adapter, or an inexpensive USB to 4.0/1.7mm power jack adapter to use with a 5V/2A USB power adapter.

Orange_Pi_One_Power_Supply_Connections

As you connect the power supply, the red LED should light up, and after a few seconds, you should see the kernel log on the HDMI TV or monitor. I also accessed the serial console via a UART debug board, but it will only show the very beginning, as once the framebuffer is set up most messages are redirected to the monitor. This is what I got for the first boot in the serial console:

But I got many error messages on the TV reading “[cpu_freq] ERR: set cpu frequency top 1296MHz failed!”. Those are actually normal because a single firmware image is used for all Orange Pi Allwinner H3 boards, and they use different regulators. The messages will disappear once the system has detected an Orange Pi One.

Orange_Pi_One_cpu_freq_Error_Message

You may have to be patient during the first few minutes of the very first boot (2 to 3 minutes) as you see the error messages above looping seemingly forever, while the system is resizing the root file system partition, creating a 128MB emergency swap area, creating the SSH key, and updating some packages. Once this is all done, the system will reboot, and you’ll be asked to change the root password, create a new user, and adjust the resolution with the h3disp utility, which will automatically patch the script.bin file in the FAT32 boot partition of your micro SD card. The default credentials are root with password 1234.

Welcome screen and new user creation after changing root password

H3Disp options

The h3disp utility allows you to choose the resolution and refresh rate of your system. I selected 1080p50, rebooted the board one last time, and after about 20 seconds, I could get to the Debian XFCE desktop.

Click for Original Size

The resolution of the desktop is indeed 1920×1080, Ethernet is working, but my keyboard layout does not match as the default layout is for Slovenian language. I went to Settings->Keyboard to change that.

Orange_Pi_One_layout

And it seemed to work randomly as I sometimes got a QWERTY keyboard, but other times it would revert to a QWERTZ keyboard, and I’m not sure why. Following the instructions on armbian documentation using:

did not completely solve my issue either at first, but it seems to be fine now…

I’ve also noticed some permission issues, starting with the network, which requires sudo for ping and iperf, likely due to the CONFIG_ANDROID_PARANOID setting in the kernel configuration. My USB flash drive was also not automatically mounted, and I had to use sudo to mount the drive manually too.

Most people will also likely need to change the timezone with:

Orange_Pi_One_Terminal

Let’s check some parameters with the command line:

The system is running the sunxi Linux 3.4.110 kernel, and Debian 8. The processor max frequency is set to 1.2 GHz as it should be, the GPIOs appear to be supported just like in Orange Pi 2 mini (but fewer I/Os are shown), total RAM is 494MB, and 2.1GB is used out of the 29GB root partition in the micro SD card. I know some ARM boards can’t be powered off properly, but it’s not the case with Orange Pi One as I could turn it off cleanly with the power LED turning off at the end of the shutdown process.

That’s all for this guide, and I’ll showcase 3D graphics and video hardware decoding in a separate post. You can get further by checking out Armbian Orange Pi One page, following the instructions to build your own Armbian image, and browsing Orange Pi One thread in armbian forums.

Read more: http://www.cnx-software.com/2016/03/16/orange-pi-one-board-quick-start-guide-with-armbian-debian-based-linux-distribution/#ixzz435i7LyeN


一步一步实现iOS微信自动抢红包(非越狱)


微信红包

前言:最近笔者在研究iOS逆向工程,顺便拿微信来练手,在非越狱手机上实现了微信自动抢红包的功能。

题外话:此教程是一篇严肃的学术探讨类文章,仅仅用于学习研究,也请读者不要用于商业或其他非法途径上,笔者一概不负责哟~~

好了,接下来可以进入正题了!

此教程所需要的工具/文件


是的,想要实现在非越狱iPhone上达到自动抢红包的目的,工具用的可能是有点多(工欲善其事必先利其器^_^)。不过,没关系,大家可以按照教程的步骤一步一步来执行,不清楚的步骤可以重复实验,毕竟天上不会掉馅饼嘛。

解密微信可执行文件(Mach-O)


因为从Appstore下载安装的应用都是加密过的,所以我们需要用一些工具来为下载的App解密,俗称砸壳。这样才能便于后面分析App的代码结构。

首先我们需要一台已经越狱的iPhone手机(现在市面上越狱已经很成熟,具体越狱方法这里就不介绍了)。然后进入Cydia,安装OpenSSHCycriptiFile(调试程序时可以方便地查看日志文件)这三款软件。

PS:笔者的手机是iPhone 6Plus,系统版本为iOS9.1。

在电脑上用iTunes上下载一个最新的微信,笔者当时下载的微信版本为6.3.13。下载完后,iTunes上会显示出已下载的app。

iTunes

连上iPhone,用iTunes装上刚刚下载的微信应用。

打开Mac的终端,用ssh进入连上的iPhone(确保iPhone和Mac在同一个网段,笔者iPhone的IP地址为192.168.8.54)。OpenSSH的root密码默认为alpine

ssh

接下来就是需要找到微信的Bundle id了，这里笔者有一个小技巧，我们可以把iPhone上的所有App都关掉，唯独保留微信，然后输入命令 ps -e

微信bundle id

这样我们就找到了微信的可执行文件Wechat的具体路径了。接下来我们需要用Cycript找出微信的Documents的路径,输入命令cycript -p WeChat

cycript
  • 编译dumpdecrypted
    先记下刚刚我们获取到的两个路径(Bundle和Documents),这时候我们就要开始用dumpdecrypted来为微信二进制文件(WeChat)砸壳了。
    确保我们从Github上下载了最新的dumpdecrypted源码,进入dumpdecrypted源码的目录,编译dumpdecrypted.dylib,命令如下:
dumpdecrypted.dylib

这样我们可以看到dumpdecrypted目录下生成了一个dumpdecrypted.dylib的文件。

  • scp
    拷贝dumpdecrypted.dylib到iPhone上,这里我们用到scp命令.
    scp 源文件路径 目标文件路径 。具体如下:
scp
  • 开始砸壳
    dumpdecrypted.dylib的具体用法是:DYLD_INSERT_LIBRARIES=/PathFrom/dumpdecrypted.dylib /PathTo
dumpdecrypted

这样就代表砸壳成功了,当前目录下会生成砸壳后的文件,即WeChat.decrypted。同样用scp命令把WeChat.decrypted文件拷贝到电脑上,接下来我们要正式的dump微信的可执行文件了。

dump微信可执行文件


  • 从Github上下载最新的class-dump源代码,然后用Xcode编译即可生成class-dump(这里比较简单,笔者就不详细说明了)。
  • 导出微信的头文件
    使用class-dump命令,把刚刚砸壳后的WeChat.decrypted,导出其中的头文件。./class-dump -s -S -H ./WeChat.decrypted -o ./header6.3-arm64
导出的头文件

这里我们可以新建一个Xcode项目,把刚刚导出的头文件加到新建的项目中,这样便于查找微信的相关代码。

微信的头文件

找到CMessageMgr.hWCRedEnvelopesLogicMgr.h这两文件,其中我们注意到有这两个方法:- (void)AsyncOnAddMsg:(id)arg1 MsgWrap:(id)arg2;- (void)OpenRedEnvelopesRequest:(id)arg1;。没错,接下来我们就是要利用这两个方法来实现微信自动抢红包功能。其实现原理是,通过hook微信的新消息函数,我们判断是否为红包消息,如果是,我们就调用微信的打开红包方法。这样就能达到自动抢红包的目的了。哈哈,是不是很简单,我们一起来看看具体是怎么实现的吧。

  • 新建一个dylib工程,因为Xcode默认不支持生成dylib,所以我们需要下载iOSOpenDev,安装完成后(Xcode7环境会提示安装iOSOpenDev失败,请参考iOSOpenDev安装问题),重新打开Xcode,在新建项目的选项中即可看到iOSOpenDev选项了。
iOSOpenDev
  • dylib代码
    选择Cocoa Touch Library,这样我们就新建了一个dylib工程了,我们命名为autoGetRedEnv。

    删除autoGetRedEnv.h文件,修改autoGetRedEnv.m为autoGetRedEnv.mm,然后在项目中加入CaptainHook.h

    因为微信不会主动来加载我们的hook代码,所以我们需要把hook逻辑写到构造函数中。

    __attribute__((constructor)) static void entry()
    {
      //具体hook方法
    }

    hook微信的AsyncOnAddMsg: MsgWrap:方法,实现方法如下:

    //声明CMessageMgr类
    CHDeclareClass(CMessageMgr);
    CHMethod(2, void, CMessageMgr, AsyncOnAddMsg, id, arg1, MsgWrap, id, arg2)
    {
      //调用原来的AsyncOnAddMsg:MsgWrap:方法
      CHSuper(2, CMessageMgr, AsyncOnAddMsg, arg1, MsgWrap, arg2);
      //具体抢红包逻辑
      //...
      //调用原生的打开红包的方法
      //注意这里必须为给objc_msgSend的第三个参数声明为NSMutableDictionary,不然调用objc_msgSend时,不会触发打开红包的方法
      ((void (*)(id, SEL, NSMutableDictionary*))objc_msgSend)(logicMgr, @selector(OpenRedEnvelopesRequest:), params);
    }
    __attribute__((constructor)) static void entry()
    {
      //加载CMessageMgr类
      CHLoadLateClass(CMessageMgr);
      //hook AsyncOnAddMsg:MsgWrap:方法
      CHClassHook(2, CMessageMgr, AsyncOnAddMsg, MsgWrap);
    }

    项目的全部代码,笔者已放入Github中。

    完成好具体实现逻辑后,就可以顺利生成dylib了。

重新打包微信App


  • 为微信可执行文件注入dylib
    要想微信应用运行后,能执行我们的代码,首先需要微信加入我们的dylib,这里我们用到一个dylib注入神器:yololib,从网上下载源代码,编译后得到yololib。

    使用yololib简单的执行下面一句就可以成功完成注入。注入之前我们先把之前保存的WeChat.decrypted重命名为WeChat,即已砸完壳的可执行文件。
    ./yololib 目标可执行文件 需注入的dylib
    注入成功后即可见到如下信息:

    dylib注入
  • 新建Entitlements.plist
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <key>application-identifier</key>
      <string>123456.com.autogetredenv.demo</string>
      <key>com.apple.developer.team-identifier</key>
      <string>123456</string>
      <key>get-task-allow</key>
      <true/>
      <key>keychain-access-groups</key>
      <array>
          <string>123456.com.autogetredenv.demo</string>
      </array>
    </dict>
    </plist>

    这里大家也许不清楚自己的证书Teamid及其他信息,没关系,笔者这里有一个小窍门,大家可以找到之前用开发者证书或企业证书打包过的App(例如叫Demo),然后在终端中输入以下命令即可找到相关信息,命令如下:
    ./ldid -e ./Demo.app/demo

  • 给微信重新签名
    接下来把我们生成的dylib(libautoGetRedEnv.dylib)、刚刚注入dylib的WeChat、以及embedded.mobileprovision文件(可以在之前打包过的App中找到)拷贝到WeChat.app中。

    命令格式:codesign -f -s 证书名字 目标文件

    PS:证书名字可以在钥匙串中找到

    分别用codesign命令来为微信中的相关文件签名,具体实现如下:

    重新签名
  • 打包成ipa
    给微信重新签名后,我们就可以用xcrun来生成ipa了,具体实现如下:
    xcrun -sdk iphoneos PackageApplication -v WeChat.app -o ~/WeChat.ipa

安装拥有抢红包功能的微信


以上步骤如果都成功实现的话,那么真的就是万事俱备,只欠东风了~~~

我们可以使用iTools工具,来为iPhone(此iPhone Device id需加入证书中)安装改良过的微信了。

iTools

大功告成！！


好了,我们可以看看hook过的微信抢红包效果了~

自动抢红包

哈哈,是不是觉得很爽啊,”妈妈再也不用担心我抢红包了。”。大家如果有兴趣可以继续hook微信的其他函数,这样既加强了学习,又满足了自己的特(zhuang)殊(bi)需求嘛。

教程中所涉及到的工具及源代码笔者都上传到Github上。
Github地址

特别鸣谢:
1.iOS冰与火之歌(作者:蒸米)
2.iOS应用逆向工程

