
Using and Calling a Trained TensorFlow Model from Android and Java



License: BY-SA

Attribution-ShareAlike 4.0 License

Author: Tan Dong

Date: May 29, 2017

Environment: Windows 7

When we start learning to program, the first thing we usually do is print "Hello World". Just as programming has Hello World as its entry point, machine learning has MNIST.

MNIST is an entry-level computer vision dataset consisting of images of handwritten digits:


It also contains a label for each image, telling us which digit it shows. For example, the labels of the four images above are 5, 0, 4, and 1.

Here we will take a model trained with TensorFlow and call it from Android.

TensorFlow models are usually trained with the Python API, and the trained model is saved as a binary pb file that contains the serialized graph and its weights.

Google provides an image recognition model for testing at https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip.

The archive contains two files:


The first, a txt file, lists the classes that this pb model can recognize.


The second, the pb file, is the trained model itself and is about 51.3 MB.


Next we will call the API from Android or Java to use this trained model and implement image recognition.

The official TensorFlow Android demo source code is at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android.

To use it on Android, the native .so library must be available, since the Java code calls into native TensorFlow across the JNI boundary.

The JNI code is also included in the official demo.

The AAR library that exposes the TensorFlow inference API on Android can be referenced in Gradle:

compile 'org.tensorflow:tensorflow-android:+'
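For reference, this is roughly where that line goes in the app module's build.gradle. Pinning a concrete version (the 1.4.0 below is only an example) is usually more reproducible than the open-ended "+":

// app/build.gradle (sketch only; the pinned version is an example, not a requirement)
// The AAR bundles the Java API together with the prebuilt libtensorflow_inference.so
// for the common ABIs, so the native library mentioned above comes along with it.
dependencies {
    compile 'org.tensorflow:tensorflow-android:1.4.0'
}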
Basic structure:
Calling a trained model through the basic API looks roughly like this:
TensorFlowInferenceInterface tfi = new TensorFlowInferenceInterface("F:/tf_mode/output_graph.pb", "imageType");
final Operation operation = tfi.graphOperation("y_conv_add");
Output output = operation.output(0);
Shape shape = output.shape();
final int numClasses = (int) shape.size(1);
The main classes involved are TensorFlowInferenceInterface and Operation.
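Once the interface is created, actual inference comes down to three calls on it: feed the input data, run the graph, and fetch the results. A minimal sketch, continuing from the snippet above; the node names "input" and "output" and the 224x224 image size are placeholder assumptions, so use whatever your graph actually defines:

// Minimal inference sketch; node names and dimensions are placeholders.
float[] pixels = new float[224 * 224 * 3];   // preprocessed RGB image data
float[] results = new float[numClasses];     // one score per class

tfi.feed("input", pixels, 1, 224, 224, 3);   // copy the input into the graph
tfi.run(new String[] {"output"}, false);     // execute the graph up to "output"
tfi.fetch("output", results);                // copy the scores back out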
Here is the class from the official demo that makes these calls:
It reads the trained model from the Android assets directory, as can be seen from the line
c.inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);
You can change this reference according to where your trained pb file actually lives (a file-based variant is sketched after the listing below).
/* Copyright 2016 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

package org.tensorflow.demo;

import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.os.Trace;
import android.util.Log;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Vector;
import org.tensorflow.Operation;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

/** A classifier specialized to label images using TensorFlow. */
public class TensorFlowImageClassifier implements Classifier {
  private static final String TAG = "TensorFlowImageClassifier";

  // Only return this many results with at least this confidence.
  private static final int MAX_RESULTS = 3;
  private static final float THRESHOLD = 0.1f;

  // Config values.
  private String inputName;
  private String outputName;
  private int inputSize;
  private int imageMean;
  private float imageStd;

  // Pre-allocated buffers.
  private Vector<String> labels = new Vector<String>();
  private int[] intValues;
  private float[] floatValues;
  private float[] outputs;
  private String[] outputNames;

  private boolean logStats = false;

  private TensorFlowInferenceInterface inferenceInterface;

  private TensorFlowImageClassifier() {}

  /**
   * Initializes a native TensorFlow session for classifying images.
   *
   * @param assetManager The asset manager to be used to load assets.
   * @param modelFilename The filepath of the model GraphDef protocol buffer.
   * @param labelFilename The filepath of label file for classes.
   * @param inputSize The input size. A square image of inputSize x inputSize is assumed.
   * @param imageMean The assumed mean of the image values.
   * @param imageStd The assumed std of the image values.
   * @param inputName The label of the image input node.
   * @param outputName The label of the output node.
   * @throws IOException
   */
  public static Classifier create(
      AssetManager assetManager,
      String modelFilename,
      String labelFilename,
      int inputSize,
      int imageMean,
      float imageStd,
      String inputName,
      String outputName) {
    TensorFlowImageClassifier c = new TensorFlowImageClassifier();
    c.inputName = inputName;
    c.outputName = outputName;

    // Read the label names into memory.
    // TODO(andrewharp): make this handle non-assets.
    String actualFilename = labelFilename.split("file:///android_asset/")[1];
    Log.i(TAG, "Reading labels from: " + actualFilename);
    BufferedReader br = null;
    try {
      br = new BufferedReader(new InputStreamReader(assetManager.open(actualFilename)));
      String line;
      while ((line = br.readLine()) != null) {
        c.labels.add(line);
      }
      br.close();
    } catch (IOException e) {
      throw new RuntimeException("Problem reading label file!", e);
    }

    c.inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);

    // The shape of the output is [N, NUM_CLASSES], where N is the batch size.
    final Operation operation = c.inferenceInterface.graphOperation(outputName);
    final int numClasses = (int) operation.output(0).shape().size(1);
    Log.i(TAG, "Read " + c.labels.size() + " labels, output layer size is " + numClasses);

    // Ideally, inputSize could have been retrieved from the shape of the input operation. Alas,
    // the placeholder node for input in the graphdef typically used does not specify a shape, so it
    // must be passed in as a parameter.
    c.inputSize = inputSize;
    c.imageMean = imageMean;
    c.imageStd = imageStd;

    // Pre-allocate buffers.
    c.outputNames = new String[] {outputName};
    c.intValues = new int[inputSize * inputSize];
    c.floatValues = new float[inputSize * inputSize * 3];
    c.outputs = new float[numClasses];

    return c;
  }

  @Override
  public List<Recognition> recognizeImage(final Bitmap bitmap) {
    // Log this method so that it can be analyzed with systrace.
    Trace.beginSection("recognizeImage");

    Trace.beginSection("preprocessBitmap");
    // Preprocess the image data from 0-255 int to normalized float based
    // on the provided parameters.
    bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    for (int i = 0; i < intValues.length; ++i) {
      final int val = intValues[i];
      floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd;
      floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;
      floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;
    }
    Trace.endSection();

    // Copy the input data into TensorFlow.
    Trace.beginSection("feed");
    inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);
    Trace.endSection();

    // Run the inference call.
    Trace.beginSection("run");
    inferenceInterface.run(outputNames, logStats);
    Trace.endSection();

    // Copy the output Tensor back into the output array.
    Trace.beginSection("fetch");
    inferenceInterface.fetch(outputName, outputs);
    Trace.endSection();

    // Find the best classifications.
    PriorityQueue<Recognition> pq =
        new PriorityQueue<Recognition>(
            3,
            new Comparator<Recognition>() {
              @Override
              public int compare(Recognition lhs, Recognition rhs) {
                // Intentionally reversed to put high confidence at the head of the queue.
                return Float.compare(rhs.getConfidence(), lhs.getConfidence());
              }
            });
    for (int i = 0; i < outputs.length; ++i) {
      if (outputs[i] > THRESHOLD) {
        pq.add(
            new Recognition(
                "" + i, labels.size() > i ? labels.get(i) : "unknown", outputs[i], null));
      }
    }
    final ArrayList<Recognition> recognitions = new ArrayList<Recognition>();
    int recognitionsSize = Math.min(pq.size(), MAX_RESULTS);
    for (int i = 0; i < recognitionsSize; ++i) {
      recognitions.add(pq.poll());
    }
    Trace.endSection(); // "recognizeImage"
    return recognitions;
  }

  @Override
  public void enableStatLogging(boolean logStats) {
    this.logStats = logStats;
  }

  @Override
  public String getStatString() {
    return inferenceInterface.getStatString();
  }

  @Override
  public void close() {
    inferenceInterface.close();
  }
}
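As noted before the listing, the line c.inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename); is what ties the classifier to the assets directory. If the pb file sits on the file system instead, the construction could be replaced by something like the following sketch; the path is a placeholder, and the InputStream constructor is assumed to be available in the tensorflow-android AAR version in use:

// Sketch: construct the interface from a file instead of an asset.
// The path is a placeholder; the InputStream constructor is an assumption
// about the tensorflow-android AAR version being used.
java.io.File modelFile = new java.io.File("/sdcard/tf_mode/output_graph.pb");
try (java.io.InputStream is = new java.io.FileInputStream(modelFile)) {
    c.inferenceInterface = new TensorFlowInferenceInterface(is);
} catch (java.io.IOException e) {
    throw new RuntimeException("Failed to load model from " + modelFile, e);
}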
The API changed a bit in newer versions, so here is an older Android Studio demo as well.
https://github.com/Nilhcem/tensorflow-classifier-android
This is an older demo by an overseas developer, with the .so library already built. It is worth a look as a reference; usage is largely the same as in the new version.
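To tie it together, here is a rough usage sketch showing how the classifier above could be created and queried from inside an Activity. The asset file names match the contents of inception5h.zip, and the numeric parameters and node names are the ones the official demo uses for this model; srcBitmap is a placeholder for whatever image you want to classify, so treat this as an illustration rather than drop-in code.

// Usage sketch (inside an Activity). Values follow the official demo's
// settings for the inception5h model; verify them against your own graph.
Classifier classifier = TensorFlowImageClassifier.create(
    getAssets(),
    "file:///android_asset/tensorflow_inception_graph.pb",
    "file:///android_asset/imagenet_comp_graph_label_strings.txt",
    224,        // inputSize
    117,        // imageMean
    1,          // imageStd
    "input",    // input node name
    "output");  // output node name

// srcBitmap is a placeholder for the image to classify.
Bitmap scaled = Bitmap.createScaledBitmap(srcBitmap, 224, 224, false);
for (Classifier.Recognition result : classifier.recognizeImage(scaled)) {
    Log.i("Classify", result.getTitle() + ": " + result.getConfidence());
}
classifier.close();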



