Notes on fixing an OOM in Structured Streaming watermark deduplication (spark-streaming-kafka-0-10_2.11, version 2.3.2)

Tags: spark, scala, kafka, big data

Main part of the code:

    val df = kafkaReadStream(spark, KAFKA_INIT_OFFSETS, KAFKA_TOPIC)
      .option("maxOffsetsPerTrigger", 1000) // Rate limit: max total offsets processed per trigger interval; the total is split proportionally across topicPartitions of different volumes
      .option("fetchOffset.numRetries", 3) // number of retries when fetching offsets
      .option("failOnDataLoss", false) // warn instead of failing the query on data loss
      .load()
      .selectExpr("cast (value as string) as json")
      .select(from_json($"json", schema = getKafkaDNSLogSchema()).as("data"))
      //      .select("data.time", "data.host", "data.content")
      .select("data.content")
      .filter($"content".isNotNull)
      .map(row => {
        val content = JsonDNSDataHandler(row.getString(0))
        val date1 = CommonUtils.timeStamp2Date(content.split("\t")(0).toLong, "yyyy-MM-dd HH:mm:ss.SSSSSS")
        val timestamp = java.sql.Timestamp.valueOf(date1)

        (timestamp, content)
      }).as[(Timestamp, String)].toDF("timestamp", "content")
      .withWatermark("timestamp", "10 minutes") // 10-minute watermark bounds the deduplication state
      .dropDuplicates("content")

    val query = df.writeStream
      .outputMode(OutputMode.Update()) // emit only updated rows
      .trigger(Trigger.ProcessingTime("2 minutes")) // the default is ProcessingTime(0), which runs each batch as fast as possible
      //      .trigger(Trigger.ProcessingTime(0))
      //      .format("console") // write to the console, for debugging
      .format("cn.pcl.csrc.spark.streaming.HiveSinkProvider") // custom HiveSinkProvider
      .option("checkpointLocation", KAFKA_CHECK_POINTS)
      .start()
    query.awaitTermination()
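
The kafkaReadStream helper is project code not shown in the snippet. A rough sketch of what it presumably wraps (the broker address is a placeholder, since the real value is not in the post):

    // Hypothetical reconstruction of the project's kafkaReadStream helper;
    // it returns a DataStreamReader so the caller can chain more options and load().
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.DataStreamReader

    def kafkaReadStream(spark: SparkSession, startingOffsets: String, topic: String): DataStreamReader =
      spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092") // placeholder broker list
        .option("subscribe", topic)
        .option("startingOffsets", startingOffsets)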

Error log:

21/09/09 09:24:35 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect.
21/09/09 11:02:17 WARN ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 120000 milliseconds, but spent 137183 milliseconds
21/09/09 11:06:04 WARN ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 120000 milliseconds, but spent 227294 milliseconds
21/09/09 11:12:35 WARN ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 120000 milliseconds, but spent 155791 milliseconds
21/09/09 11:18:41 WARN ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 120000 milliseconds, but spent 161555 milliseconds
21/09/09 11:19:29 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Requesting driver to remove executor 2 for reason Container marked as failed: container_e32_1631077447110_0028_01_000003 on host: hdp03.pcl-test.com. Exit status: 143. Diagnostics: [2021-09-09 11:17:45.087]Container killed on request. Exit code is 143
[2021-09-09 11:17:45.088]Container exited with a non-zero exit code 143.
[2021-09-09 11:17:45.090]Killed by external signal

21/09/09 11:19:29 ERROR YarnScheduler: Lost executor 2 on hdp03.pcl-test.com: Container marked as failed: container_e32_1631077447110_0028_01_000003 on host: hdp03.pcl-test.com. Exit status: 143. Diagnostics: [2021-09-09 11:17:45.087]Container killed on request. Exit code is 143
[2021-09-09 11:17:45.088]Container exited with a non-zero exit code 143.
[2021-09-09 11:17:45.090]Killed by external signal

21/09/09 11:19:29 WARN TaskSetManager: Lost task 0.0 in stage 112.0 (TID 11256, hdp03.pcl-test.com, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container marked as failed: container_e32_1631077447110_0028_01_000003 on host: hdp03.pcl-test.com. Exit status: 143. Diagnostics: [2021-09-09 11:17:45.087]Container killed on request. Exit code is 143
[2021-09-09 11:17:45.088]Container exited with a non-zero exit code 143.
[2021-09-09 11:17:45.090]Killed by external signal

21/09/09 11:19:29 ERROR TaskSetManager: Task 0 in stage 112.0 failed 1 times; aborting job
21/09/09 11:19:29 ERROR WriteToDataSourceV2Exec: Data source writer com.hortonworks.spark.sql.hive.llap.HiveStreamingDataSourceWriter@7050cc3f is aborting.
21/09/09 11:19:29 ERROR WriteToDataSourceV2Exec: Data source writer com.hortonworks.spark.sql.hive.llap.HiveStreamingDataSourceWriter@7050cc3f aborted.
21/09/09 11:19:29 ERROR MicroBatchExecution: Query [id = 60ca7eca-727c-494c-84a8-aa542340eb53, runId = 2987ae03-058c-4c86-bc68-421083b72fab] terminated with error
org.apache.spark.SparkException: Writing job aborted.
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2.scala:112)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:256)
        at cn.pcl.csrc.spark.streaming.HiveSink.addBatch(HiveSink.scala:39)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$16.apply(MicroBatchExecution.scala:475)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3.apply(MicroBatchExecution.scala:473)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:472)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:133)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:121)
        at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:117)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 112.0 failed 1 times, most recent failure: Lost task 0.0 in stage 112.0 (TID 11256, hdp03.pcl-test.com, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container marked as failed: container_e32_1631077447110_0028_01_000003 on host: hdp03.pcl-test.com. Exit status: 143. Diagnostics: [2021-09-09 11:17:45.087]Container killed on request. Exit code is 143
[2021-09-09 11:17:45.088]Container exited with a non-zero exit code 143.
[2021-09-09 11:17:45.090]Killed by external signal

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2.scala:82)
        ... 30 more
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Writing job aborted.
=== Streaming Query ===
Identifier: [id = 60ca7eca-727c-494c-84a8-aa542340eb53, runId = 2987ae03-058c-4c86-bc68-421083b72fab]
Current Committed Offsets: {KafkaSource[Subscribe[recursive-log]]: {"recursive-log":{"0":2504047}}}
Current Available Offsets: {KafkaSource[Subscribe[recursive-log]]: {"recursive-log":{"0":2504247}}}

Current State: ACTIVE
Thread State: RUNNABLE

Logical Plan:
Deduplicate [content#39]
+- EventTimeWatermark timestamp#38: timestamp, interval 10 minutes
   +- Project [_1#32 AS timestamp#38, _2#33 AS content#39]
      +- SerializeFromObject [staticinvoke(class org.apache.spark.sql.catalyst.util.DateTimeUtils$, TimestampType, fromJavaTimestamp, assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))._1, true, false) AS _1#32, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, scala.Tuple2, true]))._2, true, false) AS _2#33]
         +- MapElements <function1>, interface org.apache.spark.sql.Row, [StructField(content,StringType,true)], obj#31: scala.Tuple2
            +- DeserializeToObject createexternalrow(content#25.toString, StructField(content,StringType,true)), obj#30: org.apache.spark.sql.Row
               +- Filter isnotnull(content#25)
                  +- Project [data#23.content AS content#25]
                     +- Project [jsontostructs(StructField(time,StringType,true), StructField(host,StringType,true), StructField(content,StringType,true), json#21, Some(Asia/Shanghai), true) AS data#23]
                        +- Project [cast(value#8 as string) AS json#21]
                           +- StreamingExecutionRelation KafkaSource[Subscribe[recursive-log]], [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13]

        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Writing job aborted.
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2.scala:112)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:256)
        at cn.pcl.csrc.spark.streaming.HiveSink.addBatch(HiveSink.scala:39)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$16.apply(MicroBatchExecution.scala:475)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3.apply(MicroBatchExecution.scala:473)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:472)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:133)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:121)
        at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:117)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
        ... 1 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 112.0 failed 1 times, most recent failure: Lost task 0.0 in stage 112.0 (TID 11256, hdp03.pcl-test.com, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container marked as failed: container_e32_1631077447110_0028_01_000003 on host: hdp03.pcl-test.com. Exit status: 143. Diagnostics: [2021-09-09 11:17:45.087]Container killed on request. Exit code is 143
[2021-09-09 11:17:45.088]Container exited with a non-zero exit code 143.
[2021-09-09 11:17:45.090]Killed by external signal

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2.scala:82)
        ... 30 more

Links that explain the problem:

So far I have only used this part of it, and it has already solved the problem.

《Spark 2.2 (38): Investigation of the high memory consumption of agg and dropDuplicates in Spark Structured Streaming before version 2.4 (Memory issue with spark structured streaming)》
The post 《Memory usage of state in Spark Structured Streaming》 explains how Spark allocates memory for streaming state and notes the impact of HDFSBackedStateStoreProvider keeping multiple versions. On Stack Overflow, others have hit the same memory problem in Structured Streaming and analyzed it in 《Memory issue with spark structured streaming》. In addition, the bug-fix list on the official Spark site contains the following:

1) Split out min retain version of state for memory in HDFSBackedStateStoreProvider
Problem description:

HDFSBackedStateStoreProvider has only one configuration for minimum versions to retain of state which applies to both memory cache and files. As default version of "spark.sql.streaming.minBatchesToRetain" is set to high (100), which doesn't require strictly 100x of memory, but I'm seeing 10x ~ 80x of memory consumption for various workloads. In addition, in some cases, requiring 2x of memory is even unacceptable, so we should split out configuration for memory and let users adjust to trade-off memory usage vs cache miss.

In normal case, default value '2' would cover both cases: success and restoring failure with less than or around 2x of memory usage, and '1' would only cover success case but no longer require more than 1x of memory. In extreme case, user can set the value to '0' to completely disable the map cache to maximize executor memory.
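
Based on that description, here is a minimal sketch of applying the same two settings when building the SparkSession instead of on the submit command line (the app name is a placeholder; the values mirror the fix at the end of this post):

    import org.apache.spark.sql.SparkSession

    // Hedged sketch: minBatchesToRetain bounds how many state versions are retained,
    // and maxBatchesToRetainInMemory = 0 disables the in-memory version cache entirely.
    val spark = SparkSession.builder()
      .appName("dns-log-dedup") // placeholder name
      .config("spark.sql.streaming.minBatchesToRetain", "3")
      .config("spark.sql.streaming.maxBatchesToRetainInMemory", "0")
      .getOrCreate()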

Fix status:

For the corresponding upstream bug reports, see 《[SPARK-24717][SS] Split out max retain version of state for memory in HDFSBackedStateStoreProvider #21700》 and 《Split out min retain version of state for memory in HDFSBackedStateStoreProvider》.

Related background:

《Spark Structured Streaming source code analysis (3): Aggregation state storage and updates》

That article describes the directory structure in which HDFSBackedStateStoreProvider stores state; it is part of a series worth reading in full. The author's diagram of the state storage directory layout is borrowed below.
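
The diagram itself is not reproduced here; schematically, the layout under the checkpoint directory looks roughly like this (angle-bracketed names are placeholders):

    <checkpointLocation>/state/<operatorId>/<partitionId>/1.delta
    <checkpointLocation>/state/<operatorId>/<partitionId>/2.delta
    <checkpointLocation>/state/<operatorId>/<partitionId>/...
    <checkpointLocation>/state/<operatorId>/<partitionId>/<version>.snapshot

Each micro-batch writes a new .delta file for its state version, and a maintenance task periodically compacts them into a .snapshot; spark.sql.streaming.minBatchesToRetain controls how many of these versions survive cleanup.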

Tuning configuration

Configuration description

Solution:

When submitting the job, add the following two configuration settings:

--conf spark.sql.streaming.minBatchesToRetain=3 --conf spark.sql.streaming.maxBatchesToRetainInMemory=0 
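
For context, a sketch of where these flags sit in a full submit command; everything besides the two --conf flags (master, deploy mode, class name, jar name) is a placeholder, not taken from the post:

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.sql.streaming.minBatchesToRetain=3 \
      --conf spark.sql.streaming.maxBatchesToRetainInMemory=0 \
      --class cn.pcl.csrc.spark.streaming.Main \
      streaming-job.jar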

Current running status:

Copyright notice: This is an original article by the blogger, licensed under CC 4.0 BY-SA. Please include a link to the original article and this notice when reposting.
Original article link: https://blog.csdn.net/ITwangnengjie/article/details/120226781
