elasticsearch 5.3.0 installation and interaction with Spark from a Jupyter notebook
Xmo_jiao, published 2017-06-14
Original post: https://blog.csdn.net/xmo_jiao/article/details/73251937 (CC 4.0 BY-SA)
1. Install Elasticsearch
1.1 Download and extract Elasticsearch 5.3.0
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.tar.gz
tar -zxvf elasticsearch-5.3.0.tar.gz
1.2 Download the ES-Hadoop package, install unzip, extract it, and copy the ES-Spark jar into spark/jars
If the Scala version on your machine is 2.11, copy elasticsearch-spark-20_2.11-5.3.0.jar; if it is 2.10, copy elasticsearch-spark-20_2.10-5.3.0.jar. You can copy both jars into spark/jars, but without the matching jar, using ES from Spark will be affected, e.g. an ES-backed Spark DataFrame cannot be shown.
wget http://download.elastic.co/hadoop/elasticsearch-hadoop-5.3.0.zip
apt-get install unzip
unzip elasticsearch-hadoop-5.3.0.zip
cd elasticsearch-hadoop-5.3.0/dist
cp elasticsearch-spark-20_2.11-5.3.0.jar $SPARK_HOME/jars/
cp elasticsearch-spark-20_2.10-5.3.0.jar $SPARK_HOME/jars/
1.3 Create a regular user es and start ES (by default it cannot be started as root)
Give the es user and group ownership of the elasticsearch directory, switch to the es user, start Elasticsearch as a daemon with -d, and check that it is running:
groupadd es
useradd es -g es -p es
passwd es
sudo chown -R es:es elasticsearch-5.3.0
chmod 777 root      # ES is unpacked under /root here (see section 2.2), so make that directory reachable for the es user
su es
cd elasticsearch-5.3.0/bin
./elasticsearch -d
ps -ef | grep elastic
1.4 Check that ES is running
1) Open the URL directly in a browser:
http://localhost:9200
2) Query it from the command line with curl:
curl -XGET http://localhost:9200
3) Check whether the ES process is running:
ps -ef|grep elastic
If it did not start, check the ES log:
cd ../logs
vim elasticsearch.log
1.5 Common ES commands
1> Check cluster health using the _cat API. Keep in mind that the node's HTTP port is 9200:
curl 'localhost:9200/_cat/health?v'
2> List the nodes in the cluster:
curl 'localhost:9200/_cat/nodes?v'
3> Create an index called "customer", then list all the indices:
curl -XPUT 'localhost:9200/customer?pretty'
curl 'localhost:9200/_cat/indices?v'
4> Delete the index we just created, then list all the indices again:
curl -XDELETE 'localhost:9200/customer?pretty'
curl 'localhost:9200/_cat/indices?v'
1.6 Problems you may run into with ES
1. Java version: Elasticsearch 5.x requires JDK 1.8.
[elsearch@vm-mysteel-dc-search01 bin]$ java -version
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
[elsearch@vm-mysteel-dc-search01 bin]$
With the wrong JDK, running ES reports version-related errors and may fail to start. Avoid the OpenJDK 7 that ships with some Linux distributions. Elasticsearch depends on Java 8; before installing Elasticsearch you can check your Java version (and install or upgrade it if necessary) with: java -version
2. ES cannot be run as root: "can not run elasticsearch as root"
For security reasons Elasticsearch refuses to run directly as root, so switch to a non-root user (create one as in section 1.3). Otherwise you will see:
[2017-01-17T21:54:48,798][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.cli.Command.main(Command. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch. ~[elasticsearch-5.1.2.jar:5.1.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap. ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch. ~[elasticsearch-5.1.2.jar:5.1.2]
... 6 more
3. Startup error: vm.max_map_count is too low
[2017-01-12T15:55:55,433][INFO ][o.e.b.BootstrapCheck ] [SfD5sIh] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks ERROR: bootstrap checks failed max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
● Temporarily raise vm.max_map_count (requires root):
sudo sysctl -w vm.max_map_count=262144
sysctl -a | grep vm.max_map_count
● Permanently change vm.max_map_count: switch to root and edit sysctl.conf:
vi /etc/sysctl.conf
Add the line:
vm.max_map_count=655360
Then apply it:
sysctl -p
Restart Elasticsearch and it will start successfully.
4. Startup error: initial heap size not equal to maximum heap size
2017-01-12T16:12:22,404][INFO ][o.e.b.BootstrapCheck ] [SfD5sIh] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks ERROR: bootstrap checks failed initial heap size [536870912] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap
Fix: edit config/jvm.options. -Xms and -Xmx must be set to the same value, otherwise ES will not start:
-Xms1024m
-Xmx1024m
1.7 Restarting ES (kill)
Kill the ES process on the server with kill, then start it again:
1. Find the ES process: ps -ef | grep elastic
2. Kill the ES process: kill -9 2382 (2382 is the PID)
3. Restart ES: ./elasticsearch -d
2. Interacting with Elasticsearch from Spark
An application in which pyspark talks to Elasticsearch.
For the detailed notebook setup, see the previous post "基于pyspark 和scala spark的jupyter notebook 安装" (installing Jupyter notebooks with pyspark and Scala Spark kernels).
2.1 Install Spark with a Jupyter notebook front end, i.e. a notebook whose kernel is pyspark
1. Install and start Spark.
2. Install Anaconda2, which provides Jupyter Notebook and Python 2.7.
3. Edit ~/.bashrc so that running pyspark from spark/bin automatically starts Jupyter; you can then launch Jupyter Notebook by starting pyspark and use pyspark inside the notebook:
vim ~/.bashrc
# add the following exports
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
source ~/.bashrc
4. Start pyspark (it launches Jupyter thanks to the .bashrc settings above):
./pyspark
Note: to keep the notebook running in the background, redirect its output to a log file:
nohup jupyter notebook > notebook.log &
Note: other ways to start Spark with the ES-Spark jar on the driver classpath:
./pyspark --driver-memory 4g --driver-class-path ../jars/elasticsearch-spark-20_2.11-5.3.0.jar
./spark-shell --driver-memory 2g --driver-class-path ../jars/elasticsearch-spark-20_2.11-5.3.0.jar
2.2 Configure Spark and Elasticsearch so that Spark connects to an external Elasticsearch
By default Spark connects to Elasticsearch on localhost. If a Spark cluster on another machine needs to reach Elasticsearch, change the default localhost in the ES config to the host's externally reachable IP, and likewise configure Spark so that it connects to that fixed ES IP by default:
cd /root/elasticsearch-5.3.0/config/
vim elasticsearch.yml
# add the following to elasticsearch.yml
network.host: 9.30.166.20
http.port: 9200
cd /root/spark/conf/
cp spark-defaults.conf.template spark-defaults.conf
vim spark-defaults.conf
# add the following to spark-defaults.conf
spark.es.nodes 9.30.166.20
spark.es.port 9200
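As an alternative to spark-defaults.conf, the ES connection can also be passed per job as data source options. A minimal pyspark sketch (assuming the spark/docs index created in section 2.3 already exists, and reusing the example IP above):
df = spark.read.format("org.elasticsearch.spark.sql") \
    .option("es.nodes", "9.30.166.20") \
    .option("es.port", "9200") \
    .load("spark/docs")   # "index/type" to read from ES
df.show()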
2.3 Test: use spark-shell to check that Spark and Elasticsearch can interact
At this point Spark and Elasticsearch are both running, Jupyter with the matching kernel is installed, and the required ES-Spark jar has been copied into the jars folder of the Spark installation.
1. Start spark-shell:
cd /root/spark/bin
./spark-shell
2. Paste the following Scala code into spark-shell (a pyspark version of the same test is sketched after step 3):
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.elasticsearch.spark._
val conf =new SparkConf().setAppName("Recommand").setMaster("spark://zhuoling1.fyre.ibm.com:7077")
//val conf = spark.conf
conf.set("es.index.auto.create","true")
conf.set("es.nodes","127.0.0.1")
val numbers=Map("one"->1,"two"->2)
val airports=Map("OTP"->"Otopeni","SFO"->"San Fran")
//val sc = new SparkContext(conf)
val aa=sc.makeRDD(Seq(numbers,airports))
aa.saveToEs("spark/docs")
3. Check that the documents saved by the code above landed in the spark/docs index by listing the indices on the ES node:
curl 'localhost:9200/_cat/indices?v'
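A rough pyspark equivalent of the Scala test above (a sketch, run from the Jupyter/pyspark session; it assumes the ES-Spark jar from section 1.2 is on the classpath and ES is reachable on localhost):
from pyspark.sql import Row
airports = spark.createDataFrame([Row(iata="OTP", name="Otopeni"),
                                  Row(iata="SFO", name="San Fran")])
airports.write.format("org.elasticsearch.spark.sql") \
    .option("es.nodes", "127.0.0.1").option("es.port", "9200") \
    .save("spark/docs", mode="append")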
2.4 Install the Python clients for TMDb and Elasticsearch, plus the python names package
pip install elasticsearch
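A quick sanity check (a sketch, not from the original post) that the Python client can reach ES; it assumes ES listens on localhost:9200, so adjust the host if you changed network.host in section 2.2:
from elasticsearch import Elasticsearch
es = Elasticsearch(["http://localhost:9200"])
print(es.ping())   # True if the cluster is reachable
print(es.info())   # cluster name and version details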
2.5 A Yelp recommender system based on the Spark-Elasticsearch integration
Business recommendations based on the US Yelp app. The data is open and comes from the official Yelp website.
#Add libraries
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.recommendation import ALS
from pyspark.sql.functions import col
from pyspark.sql.types import *
from elasticsearch import Elasticsearch
es = Elasticsearch()
esIndex = "yelpindex"
esDocType = "yelp"
#Create the data schema in ES. This step only needs to be executed once per dataset, since it initializes the index mapping in ES.
def initIndex():
    create_index = {
        "settings": {
            "analysis": {
                "analyzer": {
                    "payload_analyzer": {
                        "type": "custom",
                        "tokenizer": "whitespace",
                        "filter": "delimited_payload_filter"
                    }
                }
            }
        },
        "mappings": {
            "yelp": {
                "properties": {
                    "text": {"type": "text"},
                    "userId": {"type": "integer", "index": "not_analyzed"},
                    "itemId": {"type": "integer", "index": "not_analyzed"},
                    "stars": {"type": "double"},
                    "is_open": {"type": "double"},
                    "@model": {
                        "properties": {
                            "factor": {
                                "type": "text",
                                "term_vector": "with_positions_offsets_payloads",
                                "analyzer": "payload_analyzer"
                            },
                            "version": {"type": "keyword"}
                        }
                    }
                }
            }
        }
    }
    # create index with the settings & mappings above
    es.indices.create(index=esIndex, body=create_index)
    print "ES index(%s) create success" % esIndex
# Prepare the data and index it into the ES server. 3.1 Load the original data from files.
#This step is only used to load the initial dataset into ES; it only needs to be executed once.
yelp_review = spark.read.json("/root/yelp/yelp_academic_dataset_review.json")\
.select("business_id","stars","user_id","text")
yelp_business = spark.read.json("/root/yelp/yelp_academic_dataset_business.json")\
.select("business_id","name","address","city","categories","is_open")
yelp_review.show(5)
yelp_business.show(5)
#Join the review and business data so that we get all the ratings for each business
#We only keep businesses that are open in Las Vegas. The full dataset is too large for Python to process at once, so restricting it to Las Vegas reduces the data preparation effort.
yelp_data = yelp_review.join(yelp_business, yelp_review.business_id == yelp_business.business_id)\
.select(yelp_review.business_id, yelp_review.stars, yelp_review.user_id, yelp_review.text,\
yelp_business.name,yelp_business.address,yelp_business.city,yelp_business.categories,yelp_business.is_open)\
.filter(yelp_business.city == 'Las Vegas').filter( yelp_business.is_open == 1)
num_yelp = yelp_data.count()
yelp_data.show(5)
print yelp_data.count()
print yelp_business.count()
#3.3 Split the dataset into training data and backup data
yelp_test, yelp_backup = yelp_data.randomSplit([0.1,0.9])
num_test=yelp_test.count()
print "The number of training data is ",num_test
#3.4 Create integer ids for businesses and users
#The original business id and user id are strings, but ES needs integer ids for better performance, so we use a Pipeline of StringIndexer transformers to generate integer ids for business and user.
businessIndexer = StringIndexer(inputCol="business_id",outputCol="itemId")
userIndexer = StringIndexer(inputCol="user_id",outputCol="userId")
pipeline = Pipeline(stages=[businessIndexer, userIndexer])
review_df=pipeline.fit(yelp_test).transform(yelp_test).select("text","name","address","city","categories","is_open",col("userId").cast(IntegerType()), col("itemId").cast(IntegerType()),col("stars").cast(DoubleType()))
review_df.show(5)
review_df.printSchema()
#3.5 Insert and index data into ES
data = review_df.collect()
i = 0
for row in data:
    yelp = {
        "itemId": row.itemId,
        "id": row.itemId,
        "name": row.name,
        "address": row.address,
        "city": row.city,
        "categories": row.categories,
        "userId": row.userId,
        "text": row.text,
        "stars": row.stars,
        "is_open": row.is_open
    }
    es.index(esIndex, esDocType, id=yelp['itemId'], body=yelp)
    i += 1
    if i % 5000 == 0: print "Indexed %s items of %s" % (i, num_test)
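An optional alternative (a sketch, not part of the original post): index the rows in bulk with the elasticsearch-py helpers module instead of one es.index() call per row, which is considerably faster for large datasets:
from elasticsearch import helpers
actions = ({"_index": esIndex, "_type": esDocType, "_id": row.itemId,
            "_source": {"itemId": row.itemId, "id": row.itemId, "name": row.name,
                        "address": row.address, "city": row.city,
                        "categories": row.categories, "userId": row.userId,
                        "text": row.text, "stars": row.stars, "is_open": row.is_open}}
           for row in data)
helpers.bulk(es, actions)   # same documents and ids as the loop above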
#4. Create the model and save it into ES
#4.1 Load the data back from ES to build the model
yelp_df = spark.read.format("es").option("es.read.field.as.array.include", "categories").load(esIndex+"/"+esDocType)
yelp_df.printSchema()
yelp_df.show(5)
#4.2 Train ALS model
als = ALS(userCol="userId", itemCol="itemId", ratingCol="stars", regParam=0.1, rank=10, seed=42)
model = als.fit(yelp_df)
model.userFactors.show(10)
model.itemFactors.show(10)
#4.3 Convert the model data so that it can be saved into ES.
from pyspark.sql.types import *
from pyspark.sql.functions import udf, lit
def convert_vector(x):
    '''Convert a list or numpy array to delimited token filter format'''
    return " ".join(["%s|%s" % (i, v) for i, v in enumerate(x)])

def reverse_convert(s):
    '''Convert a delimited token filter format string back to list format'''
    return [float(f.split("|")[1]) for f in s.split(" ")]

def vector_to_struct(x, version):
    '''Convert a vector to a SparkSQL Struct with string-format vector and version fields'''
    return (convert_vector(x), version)

vector_struct = udf(vector_to_struct, \
                    StructType([StructField("factor", StringType(), True), \
                                StructField("version", StringType(), True)]))
#Show the model data format
# test out the vector conversion function
test_vec = model.itemFactors.select("features").first().features
print test_vec
print
print convert_vector(test_vec)
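A quick round-trip check of the two helper functions (just a usage example):
print(reverse_convert(convert_vector(test_vec)))   # should reproduce test_vec, up to float formatting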
#4.4 Save model into ES
ver = model.uid
item_vectors = model.itemFactors.select("id", vector_struct("features", lit(ver)).alias("@model"))
# write data to ES, use:
# - "id" as the column to map to ES yelp id
# - "update" write mode for ES
# - "append" write mode for Spark
item_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save(esIndex+"/"+esDocType, mode="append")
#Search for one document to check that the model was saved successfully
es.search(index=esIndex, doc_type=esDocType, q="Target", size=1)
#5. Search for similar businesses in ES
#5.1 Search for similar businesses in ES using cosine similarity
def fn_query(query_vec, q="*", cosine=False):
    # "payload_vector_score" is a native script; it requires a vector-scoring plugin
    # to be installed on the ES node (not covered in this post)
    return {
        "query": {
            "function_score": {
                "query": {
                    "query_string": {
                        "query": q
                    }
                },
                "script_score": {
                    "script": {
                        "inline": "payload_vector_score",
                        "lang": "native",
                        "params": {
                            "field": "@model.factor",
                            "vector": query_vec,
                            "cosine": cosine
                        }
                    }
                },
                "boost_mode": "replace"
            }
        }
    }
def get_similar(the_id, q="*", num=10, index=esIndex, dt=esDocType):
    response = es.get(index=index, doc_type=dt, id=the_id)
    src = response['_source']
    if '@model' in src and 'factor' in src['@model']:
        raw_vec = src['@model']['factor']
        # our script actually uses the list form for the query vector and handles conversion internally
        query_vec = reverse_convert(raw_vec)
        q = fn_query(query_vec, q=q, cosine=True)
        results = es.search(index, dt, body=q)
        hits = results['hits']['hits']
        return src, hits[1:num+1]
def yelp_similar(the_id, q="*", num=10, index=esIndex, dt=esDocType):
    business, recs = get_similar(the_id, q, num, index, dt)
    # display the query business
    print "Business: ", business['id']
    print "Business Name: ", business['name']
    print "Address: ", business['address']
    print "Category: ", business['categories']
    print "***************************"
    print "Similar Business List:"
    i = 0
    for rec in recs:
        i += 1
        r_score = rec['_score']
        r_source = rec['_source']
        business_id = r_source['id']
        city = r_source['city']
        name = r_source['name']
        text = r_source['text']
        userId = r_source['userId']
        stars = r_source['stars']
        address = r_source['address']
        categories = r_source['categories']
        print "==================================="
        print "No %s:" % i
        print "Score: ", r_score
        print "Business ID: %s" % business_id
        print "City: ", city
        print "Name: ", name
        print "Address: ", address
        print "Category: ", categories
        print "UserId: ", userId
        print "Stars: ", stars
        print "User Comment: "
        print "----------"
        print text
#5.2 Search for similar businesses in ES.
#We can get the top 10 most similar businesses from ES for a given business ID. Since the model has been saved in ES, other applications can also obtain these results directly from ES.
yelp_similar(188)
yelp_similar(2391)
References:
1. Official documentation:
● How to install and run Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/zip-targz.html
● How to configure Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html
2. Blog posts covering related content:
http://blog.sina.com.cn/s/blog_c90ce4e001032f7w.html
http://blog.csdn.net/jklfjsdj79hiofo/article/details/72355167