Prospective students of the "Node.js Developer" course are invited to sign up for an open lesson on the topic "Dockerizing Node.js applications".
In the meantime, we are sharing our traditional translation of the article.
Today we will talk about using Rust together with Node.js, and about building AIaaS (Artificial Intelligence as a Service) applications on Node.js.

While there are many excellent machine learning frameworks for Python, the vast majority of web developers work in JavaScript. If you want to offer AI capabilities as a web service, it makes sense to package them for JavaScript and Node.js.

However, AI computation is heavy, and this is where Python and JavaScript differ. Python copes with number crunching because its machine learning frameworks are, under the hood, wrappers around native C/C++ libraries. Node.js has a solution for exactly this kind of task: WebAssembly.

WebAssembly virtual machines integrate tightly with Node.js and other JavaScript runtimes; they are also safe, portable, and efficient. In our application, WebAssembly gives us near-native performance.
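To make that concrete before diving into the real application, here is a minimal sketch of a Rust function exported to JavaScript (the function name and body are invented for illustration; the real project uses the same #[wasm_bindgen] mechanism). After compiling it with a tool such as ssvmup, Node.js can require() the generated module and call the function like ordinary JavaScript:

use wasm_bindgen::prelude::*;

// A deliberately tiny compute function. After building with ssvmup,
// Node.js can require() the generated module and call sum_of_squares()
// like an ordinary JavaScript function.
#[wasm_bindgen]
pub fn sum_of_squares(n: u32) -> u32 {
    (1..=n).map(|i| i * i).sum()
}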
In this article we will show, step by step, how to build an AIaaS application on Node.js.
The application is split into three layers:

- The Node.js application provides the web service and calls a WebAssembly function to handle compute-intensive tasks, such as AI inference.
- The WebAssembly function handles data preparation, post-processing, and integration with other systems. For now we support Rust; this function is written by the application developer.
- The AI model itself is executed as native code for maximum performance.

WebAssembly acts as the bridge here: we combine the performance of Python's native backends with the convenience of Node.js development.

Our demo application offers face detection as a service.
The Rust source code for MTCNN face detection is based on Cetra's tutorial Face Detection with Tensorflow Rust. We modified it so that the Tensorflow code works from WebAssembly.

The Node.js part of the application is trivial; it only passes the data back and forth:
app.post('/infer', function (req, res) {
    let image_file = req.files.image_file;
    var result_filename = uuidv4() + ".png";
    // Call the infer() function from WebAssembly (SSVM).
    // Store it as `result`, not `res`, to avoid shadowing the response object.
    var result = infer(req.body.detection_threshold, image_file.data);
    fs.writeFileSync("public/" + result_filename, result);
    res.send('<img src="' + result_filename + '"/>');
});
As you can see, the JavaScript code merely passes two things to the infer() function: the detection_threshold parameter, which specifies the smallest face to detect, and the image data. Everything interesting happens inside infer().

The infer() function is written in Rust and compiled to WebAssembly so that it can be called from JavaScript. It flattens the input image into an array of raw pixel values and feeds that data to a TensorFlow model. The TensorFlow run returns a set of numbers: the coordinates of the corners of the rectangle around each detected face. infer() then draws a green box around each face and encodes the resulting image as a PNG for the web page.
#[wasm_bindgen]
pub fn infer(detection_threshold: &str, image_data: &[u8]) -> Vec<u8> {
    let mut dt = detection_threshold;
    ... ...
    let mut img = image::load_from_memory(image_data).unwrap();

    // Run the tensorflow model using the face_detection_mtcnn native wrapper
    let mut cmd = Command::new("face_detection_mtcnn");
    // Pass in some arguments
    cmd.arg(img.width().to_string())
        .arg(img.height().to_string())
        .arg(dt);

    // The image pixel data is passed in via STDIN, byte by byte in BGR order
    // (the order the model's `input` tensor expects)
    for (_x, _y, rgb) in img.pixels() {
        cmd.stdin_u8(rgb[2] as u8)
            .stdin_u8(rgb[1] as u8)
            .stdin_u8(rgb[0] as u8);
    }
    let out = cmd.output();

    // Draw boxes from the result JSON array
    let line = Pixel::from_slice(&[0, 255, 0, 0]);
    let stdout_json: Value = from_str(str::from_utf8(&out.stdout).expect("[]")).unwrap();
    let stdout_vec = stdout_json.as_array().unwrap();
    for i in 0..stdout_vec.len() {
        let xy = stdout_vec[i].as_array().unwrap();
        let x1: i32 = xy[0].as_f64().unwrap() as i32;
        let y1: i32 = xy[1].as_f64().unwrap() as i32;
        let x2: i32 = xy[2].as_f64().unwrap() as i32;
        let y2: i32 = xy[3].as_f64().unwrap() as i32;
        let rect = Rect::at(x1, y1).of_size((x2 - x1) as u32, (y2 - y1) as u32);
        draw_hollow_rect_mut(&mut img, rect, *line);
    }
    let mut buf = Vec::new();
    // Encode the result image as PNG and return the bytes
    img.write_to(&mut buf, image::ImageOutputFormat::Png).expect("Unable to write");
    return buf;
}
The face_detection_mtcnn command runs the MTCNN TensorFlow model as native code. It takes three arguments: image width, image height, and the detection threshold. The raw pixel data prepared by infer() in WebAssembly is passed in via STDIN. The result of the model run is encoded in JSON and printed to STDOUT.

Notice that we pass the detection_threshold argument to the model tensor named min_size, while the input tensor carries the image data. The box tensor is used to retrieve the results from the model.
fn main() -> Result<(), Box<dyn Error>> {
    // Get the arguments passed in from WebAssembly
    let args: Vec<String> = env::args().collect();
    let img_width: u64 = args[1].parse::<u64>().unwrap();
    let img_height: u64 = args[2].parse::<u64>().unwrap();
    let detection_threshold: f32 = args[3].parse::<f32>().unwrap();

    // The image bytes are read from STDIN and flattened into f32 values
    let mut buffer: Vec<u8> = Vec::new();
    let mut flattened: Vec<f32> = Vec::new();
    io::stdin().read_to_end(&mut buffer)?;
    for num in buffer {
        flattened.push(num.into());
    }

    // Load up the graph as a byte array and create a tensorflow graph.
    let model = include_bytes!("mtcnn.pb");
    let mut graph = Graph::new();
    graph.import_graph_def(&*model, &ImportGraphDefOptions::new())?;

    let mut args = SessionRunArgs::new();
    // The `input` tensor expects BGR pixel data from the input image
    let input = Tensor::new(&[img_height, img_width, 3]).with_values(&flattened)?;
    args.add_feed(&graph.operation_by_name_required("input")?, 0, &input);
    // The `min_size` tensor takes the detection_threshold argument
    let min_size = Tensor::new(&[]).with_values(&[detection_threshold])?;
    args.add_feed(&graph.operation_by_name_required("min_size")?, 0, &min_size);
    // Default input params for the model
    let thresholds = Tensor::new(&[3]).with_values(&[0.6f32, 0.7f32, 0.7f32])?;
    args.add_feed(&graph.operation_by_name_required("thresholds")?, 0, &thresholds);
    let factor = Tensor::new(&[]).with_values(&[0.709f32])?;
    args.add_feed(&graph.operation_by_name_required("factor")?, 0, &factor);

    // Request the following outputs after the session runs.
    let bbox = args.request_fetch(&graph.operation_by_name_required("box")?, 0);
    let session = Session::new(&SessionOptions::new(), &graph)?;
    session.run(&mut args)?;

    // Get the bounding boxes; the model emits each box as [y1, x1, y2, x2]
    let bbox_res: Tensor<f32> = args.fetch(bbox)?;
    let mut iter = 0;
    let mut json_vec: Vec<[f32; 4]> = Vec::new();
    while (iter * 4) < bbox_res.len() {
        // Reorder the coordinates to [x1, y1, x2, y2]
        json_vec.push([
            bbox_res[4 * iter + 1], // x1
            bbox_res[4 * iter],     // y1
            bbox_res[4 * iter + 3], // x2
            bbox_res[4 * iter + 2], // y2
        ]);
        iter += 1;
    }
    let json_obj = json!(json_vec);
    // Return the result JSON via STDOUT
    println!("{}", json_obj.to_string());
    Ok(())
}
As you can see, the wrapper program is quite simple and straightforward.
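Since the wrapper speaks such a simple protocol (arguments on the command line, raw pixels on STDIN, JSON on STDOUT), it can be smoke-tested natively, without WebAssembly in the loop. Below is a rough sketch of such a harness; the 2x2 dummy image is made up, and with real inputs you would stream the pixels of an actual photo:

use std::io::Write;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // A dummy 2x2 "image": 4 pixels x 3 bytes each (BGR), all mid-gray.
    let pixels = vec![128u8; 2 * 2 * 3];
    let mut child = Command::new("face_detection_mtcnn")
        .args(["2", "2", "20"]) // width, height, detection threshold
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    child.stdin.take().unwrap().write_all(&pixels)?;
    let out = child.wait_with_output()?;
    // The wrapper prints a JSON array of [x1, y1, x2, y2] face boxes.
    println!("{}", String::from_utf8_lossy(&out.stdout));
    Ok(())
}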
To build and run the application you need Rust, Node.js, the Second State WebAssembly VM, and the ssvmup tool; alternatively, you can use a ready-made Docker image. You also need the TensorFlow library.
$ wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-1.15.0.tar.gz
$ sudo tar -C /usr/ -xzf libtensorflow-gpu-linux-x86_64-1.15.0.tar.gz
# in the native_model_zoo/face_detection_mtcnn directory
$ cargo install --path .
Now build the web application itself. ssvmup compiles the Rust source code into a WebAssembly function and generates the JavaScript glue for calling it; the resulting WebAssembly file is picked up by the web application.
# in the nodejs/face_detection_service directory
$ ssvmup build
Once the WebAssembly function is built, you can start the Node.js web application.
$ npm i express express-fileupload uuid
$ cd node
$ node server.js
The web application is available on port 8080 of your machine. Try it out with your family photos!
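If you would rather test the service from code than from the browser, any HTTP client that can do multipart uploads will work. Here is a sketch using the reqwest crate (the crate choice, the threshold value, and the file name family.jpg are assumptions, not part of the original project):

// Cargo.toml: reqwest = { version = "0.11", features = ["blocking", "multipart"] }
use reqwest::blocking::multipart;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let form = multipart::Form::new()
        .text("detection_threshold", "20")
        .file("image_file", "family.jpg")?;
    let response = reqwest::blocking::Client::new()
        .post("http://localhost:8080/infer")
        .multipart(form)
        .send()?;
    // The service responds with an <img> tag pointing at the result PNG.
    println!("{}", response.text()?);
    Ok(())
}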
TensorFlow Model Zoo
The Rust crate face_detection_mtcnn is a thin wrapper around the TensorFlow library. It loads the trained TensorFlow model (known as a frozen saved model), prepares the input for the model, executes it, and retrieves the output values.

In fact, our wrapper retrieves only the coordinates of the boxes around detected faces. The model additionally provides a confidence score for every detected face and the positions of facial landmarks (eyes, mouth, and so on). By changing the names of the tensors fetched from the model, the wrapper could retrieve this information too and pass it on to the WASM function.
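As a sketch of that extension, one extra request_fetch call per output tensor is all it takes. The tensor name "prob" below is our assumption about the MTCNN graph and should be checked against the actual frozen model:

use std::error::Error;
use tensorflow::{Graph, Session, SessionRunArgs, Tensor};

// Sketch: fetch the face boxes *and* a per-face confidence in one run.
// The feed tensors mirror the ones used in main() above; the output
// tensor name "prob" is an assumption -- verify it against your model.
fn boxes_with_confidence(
    graph: &Graph,
    session: &Session,
    input: &Tensor<f32>,
    min_size: &Tensor<f32>,
) -> Result<(Tensor<f32>, Tensor<f32>), Box<dyn Error>> {
    let mut args = SessionRunArgs::new();
    args.add_feed(&graph.operation_by_name_required("input")?, 0, input);
    args.add_feed(&graph.operation_by_name_required("min_size")?, 0, min_size);
    let thresholds = Tensor::new(&[3]).with_values(&[0.6f32, 0.7f32, 0.7f32])?;
    args.add_feed(&graph.operation_by_name_required("thresholds")?, 0, &thresholds);
    let factor = Tensor::new(&[]).with_values(&[0.709f32])?;
    args.add_feed(&graph.operation_by_name_required("factor")?, 0, &factor);
    let bbox = args.request_fetch(&graph.operation_by_name_required("box")?, 0);
    let prob = args.request_fetch(&graph.operation_by_name_required("prob")?, 0);
    session.run(&mut args)?;
    // One confidence value per box.
    Ok((args.fetch(bbox)?, args.fetch(prob)?))
}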
If you want to use a different model, following this example and writing a wrapper for your own model should be fairly easy. You only need to know the names and data types of the input and output tensors.

To that end, we have launched the native model zoo project, whose goal is to provide ready-to-use Rust wrappers for as many TensorFlow models as possible.
In this article we have shown how to implement a real-world AIaaS use case on Node.js using Rust and WebAssembly. Our approach provides a framework to which the whole community can contribute: a Model Zoo that can serve as an AI library for web application developers.
With such a collection of ready-made TensorFlow wrappers, developers can add AI features to their applications without training or tuning models themselves.
"Node.js Developer". " node.js " .