tf.map_fn
Published: 2019-03-17


Maps fn on the list of tensors unpacked from elems on dimension 0.

tf.map_fn(
    fn,
    elems,
    dtype=None,
    parallel_iterations=None,
    back_prop=True,
    swap_memory=False,
    infer_shape=True,
    name=None
)

The simplest version of map_fn repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems. dtype is the data type of the return value of fn. Users must provide dtype if it is different from the data type of elems.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [values.shape[0]] + fn(values[0]).shape.
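
A minimal sketch of this shape rule, assuming TensorFlow 1.x graph mode (tf.reduce_sum is just a stand-in for fn):

  import tensorflow as tf

  elems = tf.constant([[1, 2], [3, 4], [5, 6]])  # shape [3, 2]
  # fn maps each row of shape [2] to a scalar, so the result shape is
  # [elems.shape[0]] + fn(elems[0]).shape == [3] + [] == [3].
  result = tf.map_fn(lambda row: tf.reduce_sum(row), elems)
  print(result.shape)  # (3,)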

This method also allows multi-arity elems and outputs of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is fn = lambda (t1, [t2, t3, [t4, t5]]): ... (Python 2 tuple-unpacking notation; in Python 3, fn receives a single argument with that nested structure, as in the sketch below).
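
A hedged Python 3 sketch of structured elems (the names t1, t2, t3 are illustrative; fn receives one argument mirroring the structure of elems):

  import tensorflow as tf

  t1 = tf.constant([1, 2, 3])
  t2 = tf.constant([10, 20, 30])
  t3 = tf.constant([100, 200, 300])
  elems = (t1, [t2, t3])

  # x has the same structure as elems: x[0] is a slice of t1,
  # x[1][0] a slice of t2, and x[1][1] a slice of t3.
  # dtype is required because the output structure differs from elems.
  result = tf.map_fn(lambda x: x[0] + x[1][0] + x[1][1], elems,
                     dtype=tf.int32)
  # result == [111, 222, 333]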

Furthermore, fn may emit a different structure than its input. For example, fn may look like: fn = lambda t1: (t1 + 1, t1 - 1). In this case, the dtype parameter is not optional: dtype must be a type or (possibly nested) tuple of types matching the output of fn.
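
A short sketch of the multi-output case (assumed TF 1.x), where dtype spells out the nested output types:

  import tensorflow as tf

  elems = tf.constant([1, 2, 3], dtype=tf.int64)
  # fn returns an (int64, float64) pair, so dtype must match that structure.
  doubled, halved = tf.map_fn(
      lambda x: (x * 2, tf.cast(x, tf.float64) / 2),
      elems,
      dtype=(tf.int64, tf.float64))
  # doubled == [2, 4, 6], halved == [0.5, 1.0, 1.5]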

To apply a functional operation to the nonzero elements of a SparseTensor, one of the following methods is recommended. First, if the function is expressible as TensorFlow ops, use

  result = SparseTensor(input.indices, fn(input.values), input.dense_shape)

If, however, the function is not expressible as a TensorFlow op, then use

  result = SparseTensor(input.indices, map_fn(fn, input.values), input.dense_shape)

instead.
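
A minimal end-to-end sketch of both recipes, with tf.square standing in for fn (assumed TF 1.x; in practice the map_fn form is only needed when fn is not expressible as TensorFlow ops):

  import tensorflow as tf

  sp = tf.SparseTensor(indices=[[0, 0], [1, 2]],
                       values=[3.0, 5.0],
                       dense_shape=[2, 3])

  # Recipe 1: fn is expressible as TensorFlow ops, so apply it to the
  # values tensor directly.
  direct = tf.SparseTensor(sp.indices, tf.square(sp.values), sp.dense_shape)

  # Recipe 2: fall back to map_fn, which calls fn once per nonzero element.
  mapped = tf.SparseTensor(sp.indices, tf.map_fn(tf.square, sp.values),
                           sp.dense_shape)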

When executing eagerly, map_fn does not execute in parallel even if parallel_iterations is set to a value > 1. You can still get the performance benefits of running a function in parallel by using the tf.contrib.eager.defun decorator:

  # Assume the function being used in map_fn is fn.
  # To ensure map_fn calls fn in parallel, use the defun decorator.
  @tf.contrib.eager.defun
  def func(tensor):
    return tf.map_fn(fn, tensor)

Note that if you use the defun decorator, any non-TensorFlow Python code that you may have written in your function won't get executed. See tf.contrib.eager.defun for more details. The recommendation would be to debug without defun but switch to defun to get performance benefits of running map_fn in parallel.
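
A hedged usage sketch making the snippet above concrete (tf.square stands in for fn; tf.enable_eager_execution and tf.contrib.eager.defun exist only in TF 1.x):

  import tensorflow as tf
  tf.enable_eager_execution()

  @tf.contrib.eager.defun
  def func(tensor):
    return tf.map_fn(tf.square, tensor)

  print(func(tf.constant([1.0, 2.0, 3.0])))  # -> [1. 4. 9.]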

Args:

  • fn: The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems. Its output must have the same structure as dtype if one is provided, otherwise it must have the same structure as elems.
  • elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to fn.
  • dtype: (optional) The output type(s) of fn. If fn returns a structure of Tensors differing from the structure of elems, then dtype is not optional and must have the same structure as the output of fn.
  • parallel_iterations: (optional) The number of iterations allowed to run in parallel. When graph building, the default value is 10. When executing eagerly, the default value is 1.
  • back_prop: (optional) True enables support for back propagation.
  • swap_memory: (optional) True enables GPU-CPU memory swapping.
  • infer_shape: (optional) False disables tests for consistent output shapes.
  • name: (optional) Name prefix for the returned tensors.

Returns:

  • A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, from first to last.

Raises:

  • TypeError: if fn is not callable or the structure of the output of fn and dtype do not match, or if elems is a SparseTensor.
  • ValueError: if the lengths of the output of fn and dtype do not match.

Examples:

  elems = np.array([1, 2, 3, 4, 5, 6])
  squares = map_fn(lambda x: x * x, elems)
  # squares == [1, 4, 9, 16, 25, 36]

  elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
  alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
  # alternate == [-1, 2, -3]

  elems = np.array([1, 2, 3])
  alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
  # alternates[0] == [1, 2, 3]
  # alternates[1] == [-1, -2, -3]

