These are my study notes on the transformers library, written after going through its original documentation.
Part one covers how to load a local model, use it, modify it, and save it.
Later I plan to add notes on training with a custom dataset and fine-tuning a model; with that, I feel I will have a solid working grasp of the library.
# Notes on loading a local model
* 1. When loading a pretrained model with the transformers library, 99% of the time goes into downloading the model. To avoid this, I downloaded the models straight from the Tsinghua mirror ("https://mirrors.tuna.tsinghua.edu.cn/hugging-face-models/") and put them under the local directory "H:\\code\\Model\\" (change this path to suit your machine).
* 2. The downloaded files are usually named "model name" + "-config.json", e.g. bert-base-cased-finetuned-mrpc-config.json. To load a local model, however, transformers expects the model directory to contain plainly named files such as config.json, vocab.txt, pytorch_model.bin, tf_model.h5 and tokenizer.json, so the model-name prefix has to be stripped from every file before loading can succeed.
The renaming script I wrote for this is below:
```python
# coding=utf-8
import os
import os.path

# Directory holding the downloaded model files
rootdir = r"H:\code\Model\bert-large-uncased-whole-word-masking-finetuned-squad"
# os.walk yields, for each directory: its path, its subdirectory names, and its file names
for parent, dirnames, filenames in os.walk(rootdir):
    for filename in filenames:
        print(filename)
        # Strip the model-name prefix so transformers finds config.json, vocab.txt, etc.
        new_name = filename.replace('bert-large-uncased-whole-word-masking-finetuned-squad-', '')
        os.rename(os.path.join(parent, filename), os.path.join(parent, new_name))
```
Once the files are renamed, the model can be loaded with the transformers library.
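Before loading a full model, a quick way to confirm the rename worked is to load just the config. This is a minimal sketch of my own (not from the docs), reusing the SQuAD model directory from the script above:

```python
from transformers import AutoConfig

# If the prefix was stripped correctly, from_pretrained finds
# config.json directly in the directory and parses it.
config = AutoConfig.from_pretrained(r"H:\code\Model\bert-large-uncased-whole-word-masking-finetuned-squad")
print(config.model_type)  # expected: "bert"
```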
# Using the models
## Sequence classification (sentiment analysis as the example)
1. Using a pipeline
```python
model_path = "H:\\code\\Model\\bert-base-cased-finetuned-mrpc\\"
from transformers import pipeline

# Use the local model with the TensorFlow weights; the default framework is PyTorch
nlp = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path, framework="tf")
result = nlp("I hate you")[0]
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
result = nlp("I love you")[0]
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
```
2. Using the model directly
```python
# PyTorch version
model_path = "H:\\code\\Model\\bert-base-cased-finetuned-mrpc\\"
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")
paraphrase_classification_logits = model(**paraphrase).logits
not_paraphrase_classification_logits = model(**not_paraphrase).logits
paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]
# Should be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
# Should not be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
```
```python
# TensorFlow version
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = TFAutoModelForSequenceClassification.from_pretrained(model_path)
classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="tf")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="tf")
paraphrase_classification_logits = model(paraphrase)[0]
not_paraphrase_classification_logits = model(not_paraphrase)[0]
paraphrase_results = tf.nn.softmax(paraphrase_classification_logits, axis=1).numpy()[0]
not_paraphrase_results = tf.nn.softmax(not_paraphrase_classification_logits, axis=1).numpy()[0]
# Should be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
# Should not be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
```
## Extractive question answering
1. Using a pipeline
```python
model_path = "H:\\code\\Model\\bert-large-uncased-whole-word-masking-finetuned-squad\\"
from transformers import pipeline

nlp = pipeline("question-answering", model=model_path, tokenizer=model_path)
context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/question-answering/run_squad.py script.
"""
result = nlp(question="What is extractive question answering?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
result = nlp(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
```
2. Using the model directly
```python
# PyTorch version
model_path = "H:\\code\\Model\\bert-large-uncased-whole-word-masking-finetuned-squad\\"
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForQuestionAnswering.from_pretrained(model_path)
text = r"""
Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
    "How many pretrained models are available in Transformers?",
    "What does Transformers provide?",
    "Transformers provides interoperability between which frameworks?",
]
for question in questions:
    inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    outputs = model(**inputs)
    answer_start_scores = outputs.start_logits
    answer_end_scores = outputs.end_logits
    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = torch.argmax(answer_start_scores)
    # Get the most likely end of the answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1
    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}")
```
```python
# TensorFlow version
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_path)
text = r"""
Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
    "How many pretrained models are available in Transformers?",
    "What does Transformers provide?",
    "Transformers provides interoperability between which frameworks?",
]
for question in questions:
    inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="tf")
    input_ids = inputs["input_ids"].numpy()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    outputs = model(inputs)
    answer_start_scores = outputs.start_logits
    answer_end_scores = outputs.end_logits
    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = tf.argmax(answer_start_scores, axis=1).numpy()[0]
    # Get the most likely end of the answer with the argmax of the score
    answer_end = (tf.argmax(answer_end_scores, axis=1) + 1).numpy()[0]
    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}")
```
## Language modeling
1. Using a pipeline
```python
model_path = "H:\\code\\Model\\distilbert-base-cased\\"
from transformers import pipeline
from pprint import pprint

nlp = pipeline("fill-mask", model=model_path, tokenizer=model_path, framework="tf")
pprint(nlp(f"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks."))
```
2. Using the model directly
```python
# PyTorch version
model_path = "H:\\code\\Model\\distilbert-base-cased\\"
from transformers import AutoModelWithLMHead, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelWithLMHead.from_pretrained(model_path)
sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."
input = tokenizer.encode(sequence, return_tensors="pt")
mask_token_index = torch.where(input == tokenizer.mask_token_id)[1]
token_logits = model(input).logits
mask_token_logits = token_logits[0, mask_token_index, :]
# Pick the five most likely replacements for the masked token
top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()
for token in top_5_tokens:
    print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))
```
```python
# TensorFlow version
from transformers import TFAutoModelWithLMHead, AutoTokenizer
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = TFAutoModelWithLMHead.from_pretrained(model_path)
sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."
input = tokenizer.encode(sequence, return_tensors="tf")
mask_token_index = tf.where(input == tokenizer.mask_token_id)[0, 1]
token_logits = model(input)[0]
mask_token_logits = token_logits[0, mask_token_index, :]
# Pick the five most likely replacements for the masked token
top_5_tokens = tf.math.top_k(mask_token_logits, 5).indices.numpy()
for token in top_5_tokens:
    print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))
```
## Text generation
1. Using a pipeline
```python
model_path = "H:\\code\\Model\\xlnet-base-cased\\"
from transformers import pipeline

text_generator = pipeline("text-generation", model=model_path, tokenizer=model_path, framework="tf")
print(text_generator("As far as I am concerned, I will", max_length=50, do_sample=False))
```
2. Using the model directly
```python
# PyTorch version
model_path = "H:\\code\\Model\\xlnet-base-cased\\"
from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Padding text helps XLNet with short prompts - proposed by Aman Rusia in https://github.com/rusiaaman/XLNet-gen#methodology
PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
remainder of the story. 1883 Western Siberia,
a young Grigori Rasputin is asked by his father and a group of men to perform magic.
Rasputin has a vision and denounces one of the men as a horse thief. Although his
father initially slaps him for making such an accusation, Rasputin watches as the
man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
with people, even a bishop, begging for his blessing. <eod> </s> <eos>"""
prompt = "Today the weather is really nice and I am planning on "
inputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors="pt")
prompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
outputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)
generated = prompt + tokenizer.decode(outputs[0])[prompt_length:]
print(generated)
```
```python
# TensorFlow version
from transformers import TFAutoModelWithLMHead, AutoTokenizer

model = TFAutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Padding text helps XLNet with short prompts - proposed by Aman Rusia in https://github.com/rusiaaman/XLNet-gen#methodology
PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
remainder of the story. 1883 Western Siberia,
a young Grigori Rasputin is asked by his father and a group of men to perform magic.
Rasputin has a vision and denounces one of the men as a horse thief. Although his
father initially slaps him for making such an accusation, Rasputin watches as the
man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
with people, even a bishop, begging for his blessing. <eod> </s> <eos>"""
prompt = "Today the weather is really nice and I am planning on "
inputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors="tf")
prompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
outputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)
generated = prompt + tokenizer.decode(outputs[0])[prompt_length:]
print(generated)
```
## Named entity recognition
1. Using a pipeline
```python
model_path = "H:\\code\\Model\\dbmdz\\bert-large-cased-finetuned-conll03-english\\"
tokenizer_path = "H:\\code\\Model\\bert-base-cased\\"
from transformers import pipeline

nlp = pipeline("ner", model=model_path, tokenizer=tokenizer_path)
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very " \
           "close to the Manhattan Bridge which is visible from the window."
print(nlp(sequence))
```
2. Using the model directly
```python
# PyTorch version
model_path = "H:\\code\\Model\\dbmdz\\bert-large-cased-finetuned-conll03-english\\"
tokenizer_path = "H:\\code\\Model\\bert-base-cased\\"
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model = AutoModelForTokenClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
label_list = [
    "O",       # Outside of a named entity
    "B-MISC",  # Beginning of a miscellaneous entity right after another miscellaneous entity
    "I-MISC",  # Miscellaneous entity
    "B-PER",   # Beginning of a person's name right after another person's name
    "I-PER",   # Person's name
    "B-ORG",   # Beginning of an organisation right after another organisation
    "I-ORG",   # Organisation
    "B-LOC",   # Beginning of a location right after another location
    "I-LOC"    # Location
]
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very " \
           "close to the Manhattan Bridge."
# Bit of a hack to get the tokens with the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")
outputs = model(inputs).logits
predictions = torch.argmax(outputs, dim=2)
print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())])
```
```python
# TensorFlow version
from transformers import TFAutoModelForTokenClassification, AutoTokenizer
import tensorflow as tf

model = TFAutoModelForTokenClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
label_list = [
    "O",       # Outside of a named entity
    "B-MISC",  # Beginning of a miscellaneous entity right after another miscellaneous entity
    "I-MISC",  # Miscellaneous entity
    "B-PER",   # Beginning of a person's name right after another person's name
    "I-PER",   # Person's name
    "B-ORG",   # Beginning of an organisation right after another organisation
    "I-ORG",   # Organisation
    "B-LOC",   # Beginning of a location right after another location
    "I-LOC"    # Location
]
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very " \
           "close to the Manhattan Bridge."
# Bit of a hack to get the tokens with the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="tf")
outputs = model(inputs)[0]
predictions = tf.argmax(outputs, axis=2)
print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())])
```
## Summarization
1. Using a pipeline
```python
model_path = "H:\\code\\Model\\t5-base\\"
from transformers import pipeline

summarizer = pipeline("summarization", model=model_path, tokenizer=model_path)
ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""
print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))
```
2. Using the model directly
```python
# PyTorch version; ARTICLE is the text defined in the pipeline example above
model_path = "H:\\code\\Model\\t5-base\\"
from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# T5 uses a max_length of 512, so we cut the article to 512 tokens.
inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="pt", max_length=512)
outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
```python
# TensorFlow version
from transformers import TFAutoModelWithLMHead, AutoTokenizer

model = TFAutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# T5 uses a max_length of 512, so we cut the article to 512 tokens.
inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="tf", max_length=512)
outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
## Translation
1. Using a pipeline
```python
model_path = "H:\\code\\Model\\t5-base\\"
from transformers import pipeline

translator = pipeline("translation_en_to_de", model=model_path, tokenizer=model_path)
print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40))
```
2. Using the model directly
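My original notes left this subsection empty. As a sketch of what it would contain, the pattern from the summarization example carries over directly, since T5 selects its task through a text prefix in the input; everything below follows that assumption (PyTorch shown; a TensorFlow variant would only swap in TFAutoModelWithLMHead and return_tensors="tf"):

```python
model_path = "H:\\code\\Model\\t5-base\\"
from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# The "translate English to German: " prefix tells T5 which task to run
inputs = tokenizer.encode(
    "translate English to German: Hugging Face is a technology company based in New York and Paris",
    return_tensors="pt",
)
outputs = model.generate(inputs, max_length=40, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0]))
```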
# Customizing a model
## Adjusting the configuration at initialization
```python
model_path = "H:\\code\\Model\\distilbert-base-uncased\\"
# PyTorch version
from transformers import DistilBertConfig, DistilBertTokenizer, DistilBertForSequenceClassification

config = DistilBertConfig(n_heads=8, dim=512, hidden_dim=4*512)
tokenizer = DistilBertTokenizer.from_pretrained(model_path)
# Built from a config rather than from_pretrained, so the weights are randomly initialized
model = DistilBertForSequenceClassification(config)

# TensorFlow version
from transformers import TFDistilBertForSequenceClassification

config = DistilBertConfig(n_heads=8, dim=512, hidden_dim=4*512)
tokenizer = DistilBertTokenizer.from_pretrained(model_path)
model = TFDistilBertForSequenceClassification(config)
```
## Adjusting only part of a pretrained model
```python
model_name = "H:\\code\\Model\\distilbert-base-uncased\\"
# model_name = "distilbert-base-uncased"  # hub name, if downloading instead
# PyTorch version
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

# Keep the pretrained body but give the classification head 10 labels;
# the new head is freshly initialized and still needs fine-tuning.
model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=10)
tokenizer = DistilBertTokenizer.from_pretrained(model_name)

# TensorFlow version
from transformers import TFDistilBertForSequenceClassification

model = TFDistilBertForSequenceClassification.from_pretrained(model_name, num_labels=10)
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
```
# Saving and loading models
```python
# Saving: after fine-tuning, save the model together with its tokenizer:
save_directory = "./save/"
tokenizer.save_pretrained(save_directory)
model.save_pretrained(save_directory)

# Loading
from transformers import AutoTokenizer, AutoModel, TFAutoModel

# Load a saved PyTorch model as a TensorFlow model
tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = TFAutoModel.from_pretrained(save_directory, from_pt=True)
# Load a saved TensorFlow model as a PyTorch model
tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = AutoModel.from_pretrained(save_directory, from_tf=True)
```
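One caveat that is easy to miss (my note, not from the docs excerpt above): `AutoModel` reloads only the base transformer and drops any task head. To get a fine-tuned classification head back, reload with the matching task class. A minimal sketch, assuming the `save_directory` above holds a sequence-classification checkpoint:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

save_directory = "./save/"
tokenizer = AutoTokenizer.from_pretrained(save_directory)
# The task-specific Auto class restores the classification head
# together with the base transformer weights.
model = AutoModelForSequenceClassification.from_pretrained(save_directory)
```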