I have an issue with the MySQL server.
I installed a second XAMPP server on top of an older one, following some other instructions. The old one is on my C drive (C:/xampp) and the new one is on my desktop (C:/Users/user/Desktop/xampp).
I managed to fix the Apache server by changing its port number, but I am still having issues with MySQL.
I tried the following:
Step 1: Search for [client] in my.ini; you will see something like this:
[client]
# password = your_password
port = 3306
socket = "C:/xampp/mysql/mysql.sock"
Now, in the port line, change 3306 to 3307 as shown below.
[client]
# password = your_password
port = 3307
socket = "C:/xampp/mysql/mysql.sock"
Step 2: Similarly, search for [mysqld]; you will see something like this:
[mysqld]
port= 3306
socket = "C:/xampp/mysql/mysql.sock"
basedir = "C:/xampp/mysql"
tmpdir = "C:/xampp/tmp"
datadir = "C:/xampp/mysql/data"
pid_file = "mysql.pid"
# enable-named-pipe
key_buffer = 16M
max_allowed_packet = 1M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
Here I changed the port number from 3306 to 3307 and added the line innodb_force_recovery = 1, exactly as shown below:
[mysqld]
port= 3307
socket = "C:/xampp/mysql/mysql.sock"
basedir = "C:/xampp/mysql"
tmpdir = "C:/xampp/tmp"
datadir = "C:/xampp/mysql/data"
pid_file = "mysql.pid"
# enable-named-pipe
key_buffer = 16M
max_allowed_packet = 1M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
innodb_force_recovery = 1
Did not work.
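For reference, a quick way to check which of the two ports is actually accepting connections (a minimal Python sketch; it assumes both servers bind to 127.0.0.1):
import socket

# Probe the default MySQL port and the new one.
# A sketch; assumes both XAMPP installs bind to localhost.
for port in (3306, 3307):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1)
    status = "open" if sock.connect_ex(("127.0.0.1", port)) == 0 else "closed"
    print(f"port {port}: {status}")
    sock.close()
If both ports report closed, the second mysqld never started at all, and the mysql_error.log named in the config above is the place to look.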
Any suggestions? Thanks.
I have successfully installed a gitlab-runner on a VM, and it is used by some of my projects. I would like to use the Interactive Web Terminal so that I have a chance to debug when a pipeline fails.
I'm trying to configure my config.toml file following the GitLab documentation, but I don't understand which IP address I should use in the listen_address setting. Should it be the IP of the machine running the runner? The Docker container instance? Or something else?
Here is my current configuration:
concurrent = 2
check_interval = 0
log_level = "panic"
[session_server]
listen_address = "0.0.0.0:8093" # listen on all available interfaces on port 8093
session_timeout = 1800
[[runners]]
name = "A test private repo"
url = "https://gitlab.com/"
token = "myToken"
executor = "docker"
[runners.custom_build_dir]
[runners.docker]
tls_verify = false
image = "alpine:latest"
privileged = false
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.custom]
run_exec = ""
I noticed that when I hit the 0.0.0.0:8093 address on the machine where the gitlab-runner is running, I get an error response (screenshot not included here).
Your configuration should use:
[session_server]
session_timeout = 1800
listen_address = "0.0.0.0:8093"
advertise_address = "<your runner IP/hostname>:8093"
Should it be the IP of the running machine?
Yes
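As a concrete illustration (203.0.113.10 below is a hypothetical placeholder; it should be an address of the VM that GitLab itself can reach), the block would be:
[session_server]
  listen_address = "0.0.0.0:8093"
  advertise_address = "203.0.113.10:8093"
  session_timeout = 1800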
Actually I have 2 questions. My first question is: how can I make HDFS close a file (for example .123456789.tmp) after the entire file has been flushed by the Flume agent?
In fact, the file never closes until I force the Flume agent to stop.
I believe there is a method using the following 4 parameters:
hdfs.rollSize = 0
hdfs.rollCount =0
hdfs.rollInterval = 0
hdfs.batchSize = 1000000
My second question is: my Flume agent receives files from an SFTP server, and I need to keep each file's original name in HDFS. It works fine with the spooldir source type, but not with SFTP. Any ideas?
My Flume agent configuration file is as follows:
agent.sources = r1
agent.channels = c1
agent.sinks = k
# configure ftp source
agent.sources.r1.type = org.keedio.flume.source.mra.source.Source
agent.sources.r1.client.source = sftp
agent.sources.r1.name.server = ip
agent.sources.r1.user = user
agent.sources.r1.password = secret
agent.sources.r1.port = 22
agent.sources.r1.knownHosts = ~/.ssh/known_hosts
agent.sources.r1.work.dir = /DATA/test/flumrFTP
agent.sources.r1.fileHeader = true
agent.sources.r1.basenameHeader = true
agent.sources.r1.inputCharset = ISO-8859-1
#agent.sources.r1.batchSize = 1000
agent.sources.r1.flushlines = true
# configure sink k
agent.sinks.k.type = hdfs
agent.sinks.k.hdfs.path = hdfs://hostname:8000/user/admin/DATA/import_flume/
agent.sinks.k.hdfs.filePrefix = %{basename}
agent.sinks.k.hdfs.rollCount = 0
agent.sinks.k.hdfs.rollInterval = 0
agent.sinks.k.hdfs.rollSize = 0
agent.sinks.k.hdfs.useLocalTimeStamp = true
agent.sinks.k.hdfs.batchSize = 1000000
agent.sinks.k.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000000
agent.channels.c1.transactionCapacity = 1000000
agent.sources.r1.channels = c1
agent.sinks.k.channel = c1
Try setting the variable hdfs.rollInterval: it is the number of seconds to wait before rolling the current file.
This setting closes the file after the number of seconds you set. I set mine at 200 seconds, and I am loading smaller files.
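Applied to the agent configuration above, that would look like this (200 seconds is just the value from my setup; tune it to your file sizes):
agent.sinks.k.hdfs.rollInterval = 200
agent.sinks.k.hdfs.rollSize = 0
agent.sinks.k.hdfs.rollCount = 0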
Below is my Flume config file. Even after changing the rollInterval and rollSize, only 10 events get written, and the console shows rollCount=10 and events=10. I also tried increasing the rollCount to 1000, but there was no change in the output. Can anyone suggest how to increase the size of the files being written to HDFS? What is wrong with the conf file below?
#naming components
NetAgent.sources = NetCat_1 NetCat_2
NetAgent.sinks = HDFS
NetAgent.channels = MemChannel
NetAgent.sources.NetCat_1.type = netcat
NetAgent.sources.NetCat_1.bind = localhost
NetAgent.sources.NetCat_1.port = 8671
NetAgent.sources.NetCat_2.type = netcat
NetAgent.sources.NetCat_2.bind = localhost
NetAgent.sources.NetCat_2.port = 8672
NetAgent.sinks.HDFS.type = hdfs
NetAgent.sinks.HDFS.hdfs.path = file path here
NetAgent.sinks.HDFS.hdfs.filePrefix = test
NetAgent.sinks.HDFS.hdfs.rollSize = 67108864
NetAgent.sinks.HDFS.hdfs.rollInterval = 3600
NetAgent.sinks.HDFS.rollCount = 0
NetAgent.sinks.HDFS.hdfs.batchSize = 10000
NetAgent.sinks.HDFS.hdfs.writeFormat = Text
NetAgent.sinks.HDFS.hdfs.fileType = DataStream
NetAgent.channels.MemChannel.type = memory
NetAgent.channels.MemChannel.capacity = 20000
NetAgent.channels.MemChannel.transactionCapacity = 20000
NetAgent.sources.NetCat_1.channels = MemChannel
NetAgent.sources.NetCat_2.channels = MemChannel
NetAgent.sinks.HDFS.channel = MemChannel
The console logs:
(SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.flume.sink.hdfs.BucketWriter.shouldRotate(BucketWriter.java)]
rolling: rollCount: 10, events: 10
(Screenshot of the files written in HDFS not included here.)
You forgot to add hdfs to your rollCount configuration key, so Flume is using the default value of 10 because it doesn't see your setting. Notice that your HDFS sink config is:
NetAgent.sinks.HDFS.type = hdfs
NetAgent.sinks.HDFS.hdfs.rollSize = 67108864
NetAgent.sinks.HDFS.hdfs.rollInterval = 3600
NetAgent.sinks.HDFS.rollCount = 0
NetAgent.sinks.HDFS.hdfs.batchSize = 10000
NetAgent.sinks.HDFS.hdfs.writeFormat = Text
NetAgent.sinks.HDFS.hdfs.fileType = DataStream
In the rollCount line, it needs to be:
NetAgent.sinks.HDFS.hdfs.rollCount = 0
This will override the default rollCount and your Flume agent will behave how you want it to.
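For completeness, the sink block from the question with only that one key corrected:
NetAgent.sinks.HDFS.type = hdfs
NetAgent.sinks.HDFS.hdfs.rollSize = 67108864
NetAgent.sinks.HDFS.hdfs.rollInterval = 3600
NetAgent.sinks.HDFS.hdfs.rollCount = 0
NetAgent.sinks.HDFS.hdfs.batchSize = 10000
NetAgent.sinks.HDFS.hdfs.writeFormat = Text
NetAgent.sinks.HDFS.hdfs.fileType = DataStream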
I am using Ganglia to monitor Hadoop. gmond and gmetad are running fine, but when I telnet to the gmond port (8649), and when I telnet to gmetad on its XML answer port, I get no Hadoop data. How can that be? Here is my gmond.conf:
cluster {
name = "my cluster"
owner = "Master"
latlong = "unspecified"
url = "unspecified"
}
host {
location = localhost
}
udp_send_channel {
#bind_hostname = yes
#mcast_join = 239.2.11.71
host = localhost
port = 8649
ttl = 1
}
udp_recv_channel {
#mcast_join = 239.2.11.71
port = 8649
retry_bind = true
# Size of the UDP buffer. If you are handling lots of metrics you really
# should bump it up to e.g. 10MB or even higher.
# buffer = 10485760
}
tcp_accept_channel {
port = 8649
# If you want to gzip XML output
gzip_output = no
}
I found the issue. It was related to the Hadoop metrics properties: I had set up Ganglia in hadoop-metrics.properties, but I actually had to configure the hadoop-metrics2.properties config file. Now Ganglia shows the correct metrics.
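For reference, a minimal sketch of the Ganglia sink section of hadoop-metrics2.properties (this assumes Hadoop 2.x with Ganglia 3.1+ and gmond listening on localhost:8649, as in the gmond.conf above; the daemon prefixes depend on which services you monitor):
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
namenode.sink.ganglia.servers=localhost:8649
datanode.sink.ganglia.servers=localhost:8649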
I have been searching high and low and still cannot get debugging working with 'Eclipse for PHP Developers 3.0.2'.
At the moment Eclipse just hangs at 57% with 'Launching: waiting for XDebug session'. But while Eclipse is hanging, the PHP file opens in an external browser and runs.
I'm using 'XAMPP 3.1.0' for the web server and have the appropriate 'php_xdebug.dll' file in the PHP ext folder.
I have tried numerous settings from other forums but still no luck. Here is my php.ini config for XDebug:
[XDebug]
zend_extension = "C:\xampp\php\ext\php_xdebug.dllstack"
;xdebug.profiler_append = 0
;xdebug.profiler_enable = 1
;xdebug.profiler_enable_trigger = 0
;xdebug.profiler_output_dir = "\xampp\tmp"
;xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.remote_enable = on
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "127.0.0.1"
;xdebug.trace_output_dir = "\xampp\tmp"
Anyone have an idea to what I need to change?
It seems the configuration settings were not correct; a good tool to use is http://xdebug.org/wizard.php.
I downloaded the new version, added it to php/ext, and updated php.ini:
[XDebug]
zend_extension = \xampp\php\ext\php_xdebug-2.2.2-5.4-vc9.dll
;zend_extension = "\xampp\php\ext\php_xdebug.dll"
;xdebug.profiler_append = 0
;xdebug.profiler_enable = 1
;xdebug.profiler_enable_trigger = 0
;xdebug.profiler_output_dir = "\xampp\tmp"
;xdebug.profiler_output_name = "cachegrind.out.%t-%s"
;xdebug.remote_enable = 0
;xdebug.remote_handler = "dbgp"
;xdebug.remote_host = "127.0.0.1"
;xdebug.trace_output_dir = "\xampp\tmp"
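Note that with everything under [XDebug] commented out except zend_extension, only the extension itself is loaded; for step debugging (as opposed to profiling), the remote settings also need to be switched on. A minimal sketch of the Xdebug 2.x settings Eclipse expects (9000 is Xdebug 2's default debug port):
[XDebug]
zend_extension = "\xampp\php\ext\php_xdebug-2.2.2-5.4-vc9.dll"
xdebug.remote_enable = 1
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "127.0.0.1"
xdebug.remote_port = 9000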